[Storage-research-list] ACM/SPEC ICPE 2018 Call for Workshops, Posters/Demos, and Wip/Vision Papers
wu huaming
huaming.wu at fu-berlin.de
Wed Jan 3 07:04:25 EST 2018
------------------------------------------------------------------------
JOINT CALL FOR WORKSHOP PAPERS, POSTERS/DEMOS, AND WIP/VISION PAPERS
ICPE 2018
9th ACM/SPEC International Conference on Performance Engineering
A Joint Meeting of WOSP/SIPEW sponsored by
ACM SIGMETRICS and ACM SIGSOFT in Cooperation with SPEC
Berlin, Germany
April 9-13, 2018
https://icpe2018.spec.org
------------------------------------------------------------------------
The goal of the International Conference on Performance Engineering
(ICPE) is to integrate theory and practice in the field of performance
engineering by providing a forum for sharing ideas and experiences
between industry and academia. ICPE is a joint meeting of the ACM
Workshop on Software and Performance (WOSP) and the SPEC International
Performance Evaluation Workshop (SIPEW). The conference brings together
researchers and industry practitioners to share and present their
experience, to discuss challenges, and to report state-of-the-art and
in-progress research on performance engineering.
------------------------------------------------------------------------
POSTERS, DEMOS, AND WIP/VISION PAPERS
The following ICPE 2018 tracks with the given submission deadlines are
still open:
- Poster and Demo Papers
Submission: Jan 15, 2018
Notification: Jan 26, 2018
Camera-ready paper submission: Feb 14, 2018
- Work in Progress and Vision Papers
Submission: Jan 10, 2018
Notification: Feb 08, 2018
Camera-ready paper submission: Feb 19, 2018
See https://icpe2018.spec.org/call-for-contributions/ for details.
------------------------------------------------------------------------
WORKSHOPS
The following workshops are organized in conjunction with ICPE 2018:
- 4th Workshop on Performance Analysis of Big Data Systems (PABS)
- 1st Workshop on Hot Topics in Cloud Computing Performance (HotCloudPerf-2018)
- Workshop on Challenges in Performance Methods for Software Development (WOSP-C'18)
- 7th International Workshop on Load Testing and Benchmarking of Software Systems (LTB 2018)
- 4th International Workshop on Energy-aware Simulation (ENERGY-SIM'18)
- 4th International Workshop on Quality-Aware DevOps (QUDOS '18)
A brief overview of all workshops is given below. For more information,
visit the conference website at https://icpe2018.spec.org/workshops
------------------------------------------------------------------------
4th Workshop on Performance Analysis of Big Data Systems (PABS)
ABSTRACT
We are seeing exponential growth in data generated by platforms such as social media, multimedia, enterprises, and the Internet of Things. It is becoming increasingly difficult to manage, analyze, visualize, model, store, and search such data.
At the same time, we witness growth in the complexity, diversity, number of deployments, and capabilities of big data processing systems such as MapReduce, Spark, HBase, Hive, Cassandra, Bigtable, Pregel, and MongoDB. Big data systems may use new operating system designs, advanced data processing algorithms, application parallelization, high-performance computing architectures such as GPUs, and clusters to improve performance. Traditional systems are also evolving to co-locate with popular big data technologies.
The Workshop on Performance Analysis of Big Data Systems (PABS) aims to provide a platform for scientific researchers, academics, and practitioners to discuss techniques, models, benchmarks, tools, case studies, and experiences in dealing with performance issues in traditional and big data systems. The primary objective is to discuss performance bottlenecks and improvements during big data analysis using different paradigms, architectures, and big data technologies. We propose to use this platform as an opportunity to discuss systems, architectures, tools, and optimization algorithms that are parallel in nature and hence can exploit recent advances to improve system performance. The workshop will focus on the performance challenges imposed by big data systems and on the state-of-the-art solutions proposed to overcome these challenges. Accepted papers will be published in the ACM proceedings and the ACM Digital Library.
WORKSHOP WEB SITE INCLUDING A DETAILED CALL FOR PAPERS:
http://ripsac.web2labs.net/pabs/
IMPORTANT DATES (Tentative):
Paper Submission deadline January 15, 2018
Author Notification February 12, 2018
Camera ready paper deadline February 18, 2018
Workshop date April 09, 2018
SUBMISSION:
Submissions describing original, unpublished recent results related to the workshop theme, up to 6 pages in ACM conference format, can be submitted through the EasyChair paper submission website: https://easychair.org/conferences/?conf=pabs2018
All submissions must be in PDF format. The preferred mode of submission is through the paper submission website; authors who encounter difficulties with online submission may instead email their manuscripts to the workshop co-chairs.
ORGANIZERS (CHAIRS):
Rekha Singhal (TCS Research)
Dheeraj Chahal (TCS Research)
------------------------------------------------------------------------
1st Workshop on Hot Topics in Cloud Computing Performance (HotCloudPerf-2018)
ABSTRACT
Cloud computing is emerging as one of the most profound changes in the way we build and use IT. The use of global services in public clouds is increasing, and the lucrative and rapidly growing global cloud market already supports over 1 million IT-related jobs. However, it is currently challenging to make the IT services offered by public and private clouds performant (in an extended sense) and efficient. Emerging architectures, techniques, and real-world systems include hybrid deployment, serverless operation, everything as a service, complex workflows, auto-scaling and -tiering, etc. It is unclear to which extent traditional performance engineering, software engineering, and system design and analysis tools can help with understanding and engineering these emerging technologies. The community also needs practical tools and powerful methods to address hot topics in cloud computing performance.
Responding to this need, the HotCloudPerf workshop proposes a meeting venue for academics and practitioners, from experts to trainees, in the field of cloud computing performance. The workshop aims to engage this community, and to lead to the development of new methodological aspects for gaining deeper understanding not only of cloud performance, but also of cloud operation and behavior, through diverse quantitative evaluation tools, including benchmarks, metrics, and workload generators. The workshop focuses on novel cloud properties such as elasticity, performance isolation, dependability, and other non-functional system properties, in addition to classical performance-related metrics such as response time, throughput, scalability, and efficiency.
Each year, the workshop chooses a focus theme to explore; for 2018, the theme is “Performance in the cloud datacenter.” Articles focusing on this topic are particularly encouraged for HotCloudPerf-2018.
The HotCloudPerf workshop is technically sponsored by the Standard Performance Evaluation Corporation (SPEC) Research Group (RG) and is organized annually by the RG Cloud Group. HotCloudPerf emerged from the series of yearly meetings that the RG Cloud Group has organized in conjunction with ICPE since 2013. The RG Cloud Group takes a broad approach, relevant to both academia and industry, to cloud benchmarking, quantitative evaluation, and experimental analysis.
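For readers new to the area, the minimal Python sketch below illustrates the kind of reactive auto-scaling policy (one facet of elasticity) that falls within the workshop's scope. It is illustrative only, not part of the call; the thresholds, per-instance capacity, and demand trace are assumptions chosen for the example.

# Illustrative sketch: a threshold-based reactive auto-scaler.
# All constants below are assumptions made for the example.

def autoscale(demand_trace, capacity_per_instance=100,
              scale_out_at=0.8, scale_in_at=0.3,
              min_instances=1, max_instances=10):
    """Return the instance count chosen at each step of a demand trace."""
    instances = min_instances
    plan = []
    for demand in demand_trace:
        utilization = demand / (instances * capacity_per_instance)
        if utilization > scale_out_at and instances < max_instances:
            instances += 1   # scale out: current capacity is overloaded
        elif utilization < scale_in_at and instances > min_instances:
            instances -= 1   # scale in: capacity is being wasted
        plan.append(instances)
    return plan

# Example: a rising-then-falling load, in requests per second.
print(autoscale([50, 120, 250, 400, 380, 200, 90, 40]))
# -> [1, 2, 3, 4, 5, 5, 4, 3]

Policies like this trade responsiveness against stability (e.g., oscillation under bursty load), which is exactly the kind of behavior that elasticity metrics aim to quantify.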
WORKSHOP WEB SITE INCLUDING A DETAILED CALL FOR PAPERS:
https://hotcloudperf.spec.org/
IMPORTANT DATES
Abstracts due: Jan 10, 2018
Papers due: Jan 15, 2018
Author notification: Feb 07, 2018
Camera-ready deadline: Feb 18, 2018
Workshop: Apr 09, 2018
SUBMISSION
We solicit full papers (max. 6 pages, including floats and references), and short papers describing tools/demos/work-in-progress (2 pages). Articles must use the ACM conference format.
ORGANIZERS (CHAIRS):
Nikolas Herbst (U. Würzburg, Germany)
Alexandru Iosup (VU Amsterdam, the Netherlands)
Web & Publicity Chair:
Erwin van Eyk (Platform9 Fission team, USA, and TU Delft, the Netherlands)
------------------------------------------------------------------------
Workshop on Challenges in Performance Methods for Software Development (WOSP-C'18)
ABSTRACT
New challenges to assuring software performance arise as new software development methods emerge. In addition to using middleware and database platforms, new applications may be implemented in environments such as Software as a Service (SaaS) and Service-Oriented Architectures, which are also key to cloud computing performance modelling. The performance characteristics of these services will inevitably influence the performance and scalability of the applications that use them. The use of DevOps means that new components will be introduced into systems while they are running. The new components must allow the performance requirements of existing components to continue to be met.
In this fourth edition of WOSP-C, we will explore the performance implications of this evolution in architecture and development practice, and its impact on performance methods. We seek to do this through research and experience papers, vision papers describing new initiatives and ideas, and discussion sessions. Papers describing new projects and approaches are particularly welcome. As implied by the title, the workshop focuses on methods usable anywhere across the life cycle, from requirements to design, testing, and evolution of the product.
The discussions will attempt to map the future of the field. They may occur in breakout sessions related to topics chosen by the participants. The discussions will be moderated, and summaries will be posted online for future reference. This is in keeping with the spirit of the first Workshop on Software and Performance, WOSP98, which successfully identified the issues that were current at the time; the acronym WOSP-C reflects this heritage. There will be sessions combining papers on research, experience, and visions with substantial discussion of issues raised by the papers or the attendees. At least a third of the time will be devoted to discussion aimed at identifying the key problems and the most fruitful lines of future research.
WORKSHOP WEBSITE INCLUDING A DETAILED CALL FOR PAPERS:
http://mifs.uib.cat/wosp-c-18/
IMPORTANT DATES:
Submission deadline: January 15, 2018
Notification to authors: February 8, 2018
Camera-ready copy: February 19, 2018
SUBMISSION:
Papers of up to 6 pages in ACM format, describing research results, experience, visions, or new initiatives, may be submitted via EasyChair at https://easychair.org/conferences/?conf=wospc18.
Program Organizing Committee:
Catalina M. Lladó, Universitat de les Illes Balears, Spain, Chair
André B. Bondi, Software Performance and Scalability Consulting LLC, USA
Davide Arcelli, University of L'Aquila, Italy
Olivia Das, Ryerson University, Canada
André van Hoorn, University of Stuttgart, Germany
Anne Koziolek, Karlsruhe Institute of Technology, Germany
Vittoria de Nitto Personè, Università di Roma Tor Vergata, Italy
Connie U. Smith, Performance Engineering Services, USA
Murray Woodside, Carleton University, Canada
------------------------------------------------------------------------
7th International Workshop on Load Testing and Benchmarking of Software Systems (LTB 2018)
ABSTRACT
Software systems (e.g., smartphone apps, desktop applications, e-commerce systems, IoT infrastructures, big data systems, and enterprise systems) have strict requirements on software performance. Failure to meet these requirements causes customer dissatisfaction and negative news coverage. In addition to conventional functional testing, the performance of these systems must be verified through load testing or benchmarking to ensure quality of service. Load testing examines the behavior of a system by simulating hundreds or thousands of users performing tasks at the same time. Benchmarking evaluates a system's performance in order to optimize its configuration or to compare it with similar systems in the domain.
Load testing and benchmarking software systems are difficult tasks that require a deep understanding of the system under test and of customer behavior. Practitioners face many challenges, such as tooling (choosing and implementing the testing tools), environments (software and hardware setup), and time (limited time to design, test, and analyze). This one-day workshop brings together software testing researchers, practitioners, and tool developers to discuss the challenges and opportunities of conducting research on load testing and benchmarking software systems.
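To make the setting concrete, the following minimal closed-loop load-test sketch uses only the Python standard library. The target URL, user count, and request count are illustrative assumptions, and real studies would typically rely on dedicated load-testing tools (e.g., JMeter).

import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET = "http://localhost:8080/"   # hypothetical system under test
USERS = 50                          # concurrent simulated users
REQUESTS_PER_USER = 20

def simulated_user(_):
    """One simulated user issuing a fixed number of sequential requests."""
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        begin = time.perf_counter()
        with urllib.request.urlopen(TARGET) as response:
            response.read()
        latencies.append(time.perf_counter() - begin)
    return latencies

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = [l for user in pool.map(simulated_user, range(USERS)) for l in user]
elapsed = time.perf_counter() - start

results.sort()
print(f"throughput:      {len(results) / elapsed:.1f} req/s")
print(f"median latency:  {results[len(results) // 2] * 1000:.1f} ms")
print(f"95th percentile: {results[int(len(results) * 0.95)] * 1000:.1f} ms")

Even a toy harness like this surfaces the workshop's core questions: how to choose a realistic workload, how long to run, and how to analyze the measurements.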
WORKSHOP WEB SITE INCLUDING A DETAILED CALL FOR PAPERS:
http://ltb2018.eecs.yorku.ca/
IMPORTANT DATES:
Research papers: Jan. 13, 2018
Presentation track: Mar. 2, 2018
Paper notification: Feb. 1, 2018
Presentation notification: Mar. 9, 2018
Workshop date: Apr. 9, 2018
SUBMISSION:
We solicit two tracks of submissions: research papers (maximum 4 pages) and a presentation track for industry or experience talks (maximum 700-word extended abstract). Technical papers should follow the standard ACM SIG proceedings format and must be submitted electronically via EasyChair. Short abstracts for the presentation track must be submitted as "abstract only" submissions via EasyChair. Accepted technical papers will be published in the ICPE 2018 proceedings. Materials from the presentation track will not be published in the ICPE 2018 proceedings, but will be made available on the workshop website. Submitted papers can be research papers, position papers, case studies, or experience reports addressing issues including, but not limited to, the following:
- Efficient and cost-effective test executions
- Rapid and scalable analysis of the measurement results
- Case studies and experience reports on load testing and benchmarking
- Load testing and benchmarking on emerging systems (e.g., adaptive/autonomic systems, big data batch and stream processing systems, and cloud services)
- Load testing and benchmarking in the context of agile software development processes
- Using performance models to support load testing and benchmarking
- Building and maintaining load testing and benchmarking as a service
- Efficient test data management for load testing and benchmarking
ORGANIZERS (CHAIRS):
Johannes Kroß, fortiss GmbH, Germany
Cor-Paul Bezemer, Queen's University, Canada
------------------------------------------------------------------------
4th International Workshop on Energy-aware Simulation (ENERGY-SIM’18)
ABSTRACT
The energy impact of IT infrastructures is a significant resource issue for many organisations. The Natural Resources Defense Council estimates that US data centers alone consumed 91 billion kilowatt-hours of electrical energy in 2013 – enough to power the households of New York twice over – and that this will grow to 139 billion kilowatt-hours by 2020. Even so, this is an underestimate, as the figure takes into account neither other countries nor all other computer usage.
There are calls for reducing computer energy consumption to bring it in line with the amount of work being performed – so-called energy-proportional computing. To achieve this, we need to understand both where the energy is being consumed within a system and how modifications to such systems will affect their functionality (such as QoS) and their energy consumption.
Monitoring and changing a live system is often not practical. There are cost implications in doing so, and it normally requires significant time to fully ascertain long-term trends. There is also the risk that any changes could have a detrimental impact, either on the functionality of the system or on the energy consumed. This can lead to a situation where anything other than the most minor tweaks to a system is considered too risky. Modelling and simulation provide an alternative approach for evaluating where energy is being consumed and for assessing the impact of changes to the system. They also offer the potential for much faster turn-around and feedback, along with the ability to evaluate the impact of many different options simultaneously.
ENERGY-SIM 2018 seeks original work focused on addressing new research and development challenges, developing new techniques, and providing case studies related to energy-aware simulation and modelling.
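As one concrete illustration of the modelling approach, the Python sketch below applies the widely used linear server power model, P(u) = P_idle + u * (P_peak - P_idle), to a synthetic utilization trace and compares the result against an ideal energy-proportional server. The power constants and the trace are assumptions made for the example, not part of this call.

# Toy energy model: linear interpolation between idle and peak power.
P_IDLE = 100.0   # watts at 0% utilization (illustrative)
P_PEAK = 250.0   # watts at 100% utilization (illustrative)
STEP_S = 3600    # each trace sample covers one hour

def power(u):
    """Power draw in watts at utilization u (0.0 to 1.0)."""
    return P_IDLE + u * (P_PEAK - P_IDLE)

# One synthetic day of hourly utilization samples.
trace = [0.05, 0.05, 0.10, 0.20, 0.45, 0.70, 0.80, 0.60,
         0.50, 0.40, 0.35, 0.30, 0.25, 0.20, 0.15, 0.10,
         0.10, 0.15, 0.30, 0.50, 0.55, 0.35, 0.15, 0.05]

joules_to_kwh = 1 / 3.6e6
modelled = sum(power(u) for u in trace) * STEP_S * joules_to_kwh
proportional = sum(u * P_PEAK for u in trace) * STEP_S * joules_to_kwh
print(f"linear model:        {modelled:.2f} kWh/day")
print(f"energy-proportional: {proportional:.2f} kWh/day")

The gap between the two totals is the cost of non-proportionality: at low utilization, idle power dominates, which is precisely the kind of effect energy-aware simulation helps to expose before a live system is changed.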
WORKSHOP WEBSITE INCLUDING A DETAILED CALL FOR PAPERS:
http://energy-sim.org/2018proposal
IMPORTANT DATES:
Abstract deadline: 8th January 2018
Paper deadline: 15th January 2018
Author notification: 4th February 2018
Camera ready deadline: 18th February 2018
SUBMISSION
Papers describing significant research contributions of theoretical and/or practical nature are being solicited for submission. Authors are invited to submit original, high-quality papers presenting new research related to energy-aware simulations.
Papers that are accepted and presented at the workshop will be published by ACM and disseminated through the ACM Digital Library. It is intended that the best papers will be put forward for a journal special issue after the workshop.
Submission will be made via EasyChair: https://easychair.org/conferences/?conf=energysim18
ORGANIZERS (CHAIRS):
General Co-Chair - Stephen McGough, Newcastle University, UK
General Co-Chair - Matthew Forshaw, Newcastle University, UK
Publicity Chair - Mehrgan Mostowfi, University of Northern Colorado, USA
------------------------------------------------------------------------
4th International Workshop on Quality-Aware DevOps (QUDOS 2018)
ABSTRACT
The QUDOS workshop provides a forum for experts from academia and
industry to present and discuss novel quality-aware methods, practices
and tools for DevOps.
DevOps extends the agile development principles to include the full
stack of software services, from design to execution, enabling and
promoting collaboration of operations, quality assurance, and
development engineers throughout the entire service lifecycle.
Ultimately, DevOps is a process that enables faster releases of a better
product to the end user. DevOps encompasses a set of values, principles,
methods, practices, and tools, to accelerate software delivery to the
customer by means of infrastructure as code, continuous integration and
deployment, automated testing and monitoring, or new architectural
styles such as microservices.
Software engineering research mainly deals with the development aspects
of DevOps, focusing on development methods, practices, and tools,
leaving the quality assurance aspects of DevOps behind. Even though
development practices such as testing (at all levels) are instrumental
in producing quality software, they mostly deal with the functional
correctness, while quality assurance deals with a more broadly defined
concept of quality, of which functional correctness is just one
dimension. However, DevOps needs methods and tools that enable
systematic assessment, prediction, and management of software quality in
other dimensions as well, such as performance, reliability, safety,
survivability, and cost of ownership.
The QUDOS workshop aims to provide a venue for advances in the state of
the art in DevOps quality assurance methods, practices, and tools. To
this end, the workshop brings together experts from both academia and
industry, working in diverse areas such as quality assurance, testing,
performance engineering, agile software engineering, and model-based
development, with the goal to identify, define, and disseminate novel
quality-aware approaches to DevOps.
Topics of interest include, but are not limited to:
* Foundations of quality assurance in DevOps:
Methodologies; integration with lifecycle management; automated
tool chains; architecture patterns; etc.
* Architectural issues in DevOps:
Scalability and capacity planning; scale-out architectures;
cloud-native application design; microservice-based architectures
* Quality assurance in the development phase:
Software models and requirements in early software development
phases; functional and non-functional testing; languages,
annotations and profiles for quality assurance; quality analysis,
verification and prediction; optimization-based architecture design;
etc.
* Quality assurance during operation:
Application performance monitoring; model-driven performance
measurement and benchmarking; feedback-based quality assurance;
capacity planning and forecasting; architectural improvements;
performance anti-pattern detection; traceability and versioning;
trace and log analysis; software regression and testing; performance
monitoring and analytics; etc.
* Continuous deployment and live experimentation:
CI and CD in DevOps; canary releases and partial rollouts; A/B
testing; performance and scalability testing via shadow launches
* Applications of DevOps:
Case studies in cloud computing, Big Data, and IoT; standardization
and interoperability; novel application domains, etc.
* All other topics related to quality in DevOps and agile service
delivery models
WORKSHOP WEB SITE INCLUDING A DETAILED CALL FOR PAPERS:
http://2018.qudos-workshop.org/
IMPORTANT DATES:
Full paper submission deadline Jan 15, 2018 (AoE)
Tool paper submission deadline Jan 15, 2018 (AoE)
Paper notification Feb 09, 2018
Camera-ready deadline Feb 18, 2018
Workshop date April 10, 2018
SUBMISSION GUIDELINES
Authors are invited to submit original, unpublished papers that are not
being considered in another forum. We solicit full papers (max 6 pages)
and short tool papers (max 2 pages). All submissions must conform to the
ACM conference format. Each full paper submission will be reviewed by at
least three members of the program committee.
Papers should be submitted via EasyChair at:
https://easychair.org/conferences/?conf=qudos2018
At least one author of each accepted paper is required to attend the
workshop and present the paper. Presented papers will be published by
ACM and included in the ACM Digital Library.
ORGANIZERS (CHAIRS):
PC Chairs
Lubomír Bulej, Charles University, Czech Republic
Antonio Filieri, Imperial College London, United Kingdom
Workshop Chairs (Steering Committee)
Danilo Ardagna, Politecnico di Milano, Italy
Giuliano Casale, Imperial College London, UK
Andre van Hoorn, University of Stuttgart, Germany
Philipp Leitner, Chalmers | University of Gothenburg, Sweden