3rd Workshop on Distributed Machine Learning for the Intelligent Computing Continuum (DML-ICC) in conjunction with IEEE/ACM UCC 2023
Program
Monday, December 04, 2023
15:30 – 15:50 | FedGrid: Federated Model Aggregation via Grid Shifting | Boris Kraychev, Ensiye Kiyamousavi and Ivan Koychev |
15:50 – 16:10 | Architecture-Based FedAvg for Vertical Federated Learning | Bruno Casella and Samuele Fonio |
16:10 – 16:30 | CCSF: Clustered Client Selection Framework for Federated Learning in non-IID Data | Aissa Hadj Mohamed, Allan M. De Souza, Joahannes B. D. da Costa, Leandro A. Villas and Julio C. Dos Reis |
16:30 – 16:50 | Runtime Management of Artificial Intelligence Applications for Smart Eyewears | Abednego Wamuhindo Kambale, Hamta Sedghani, Federica Filippini, Giacomo Verticale and Danilo Ardagna |
16:50 – 17:10 | SPACE4AI-R: a Runtime Management Tool for AI Applications Component Placement and Resources Scaling in Computing Continua | Federica Filippini, Hamta Sedghani and Danilo Ardagna |
17:10 – 17:30 | Distributed Edge Inference: an Experimental Study on Multiview Detection | Gianluca Mittone, Giulio Malenza, Robert Birke and Marco Aldinucci |
17:30 – 17:50 | FIGARO: reinForcement learnInG mAnagement acRoss the computing cOntinuum | Federica Filippini, Riccardo Cavadini, Danilo Ardagna, Riccardo Lancellotti, Gabriele Russo Russo, Valeria Cardellini and Francesco Lo Presti |
17:50 – 18:10 | A lightweight, fully-distributed AI framework for energy-efficient resource allocation in LoRa networks | Antonio Scarvaglieri, Sergio Palazzo and Fabio Busacca |
18:10 – 18:30 | Comparison of Microservice Call Rate Predictions for Replication in the Cloud | Narges Mehran, Arman Haghighi, Pedram Aminharati, Nikolay Nikolov, Ahmet Soylu, Dumitru Roman and Radu Prodan |
Welcome
As the cloud extends to the fog and to the edge, computing services can be scattered over a set of computing resources that encompasses users’ devices, the cloud, and intermediate computing infrastructure deployed in between. Moreover, increasing networking capacity promises lower delays in data transfers, enabling a continuum of computing capacity that can be used to process large amounts of data with reduced response times. Such large amounts of data are frequently processed through machine learning approaches that seek to extract knowledge from raw data generated and consumed by a widely heterogeneous set of applications. Distributed machine learning has been evolving as a tool to run learning tasks at the edge as well, often immediately after the data is produced, instead of transferring data to the centralized cloud for later aggregation and processing.
Following the successful previous editions, DML-ICC 2021 and DML-ICC 2022, this third edition of DML-ICC remains a forum for discussion among researchers with a distributed machine learning background and researchers from the parallel/distributed systems and computer networks communities. By bringing these research topics together, we look forward to building an Intelligent Computing Continuum, where distributed machine learning models can seamlessly run on any device from the edge to the cloud, creating a distributed computing system that can fulfill the requirements of highly heterogeneous applications and build knowledge from the data these applications generate.
Location
Taormina (Messina), Italy
Important Dates
Paper submission: September 21, 2023 (EXTENDED)
Notification to authors: October 15, 2023 (updated)
Camera-ready submission: October 31, 2023 (updated)
Workshop date: December 4, 2023
UCC conference dates: December 4-7, 2023
Topics
The DML-ICC 2023 workshop aims to attract researchers from the machine learning community, especially those involved with distributed machine learning techniques, and researchers from the parallel/distributed computing communities. Together, these researchers can build resource management mechanisms that fulfill the requirements of machine learning jobs, and also use machine learning techniques to improve resource management in large distributed systems. Topics of interest include but are not limited to:
- Autonomic Computing in the Continuum
- Business and Cost Models for the Computing Continuum
- Complex Event Processing and Stream Processing
- Computing and Networking Slicing for the Continuum
- Distributed Machine Learning for Resource Management and Scheduling
- Distributed Machine Learning in the Computing Continuum
- Distributed Machine Learning applications
- Distributed Machine Learning performance evaluation
- Edge Intelligence models and architectures
- Federated Learning
- Intelligent Computing Continuum architectures and models
- Management of Distributed Learning Tasks
- Mobility support in the Computing Continuum
- Network management in the Computing Continuum
- Privacy using Distributed Learning
- Programming models for the Computing Continuum
- Resource Management and Scheduling in the Computing Continuum
- Smart Environments (Smart Cities, Smart Buildings, Smart Industry, etc.)
- Theoretical Modeling for the Computing Continuum
Submission
Paper submission is electronic only, through the EasyChair system, and follows a double-blind review process. The DML-ICC workshop invites authors to submit original and unpublished work. Papers should not exceed 6 pages in ACM double-column format, including figures, tables, and references. Up to 2 additional pages may be purchased upon the approval of the proceedings chair.
All manuscripts will undergo a double-blind review process and will be judged on correctness, originality, technical strength, rigour in analysis, quality of results, quality of presentation, and interest and relevance to the conference attendees. The submitted document should not include author information, and should not include acknowledgements, citations, or discussion of related work that would make the authorship apparent. Submissions containing author-identifying information may be rejected without review. You can enable the double-anonymous mode in the LaTeX template by adding the “anonymous” option (e.g., \documentclass[manuscript, anonymous, review]{acmart}). Upon acceptance, the author and affiliation information must be added to your paper. Your submission is subject to a determination that you are not under any sanctions by ACM.
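For reference, a minimal sketch of an anonymized submission skeleton using the acmart class is shown below; the placeholder title, author, and section content are illustrative only, and authors should follow the official ACM template and the UCC instructions for the exact class options required.

    % Minimal sketch of an anonymized acmart submission (illustrative only).
    \documentclass[manuscript, anonymous, review]{acmart}

    \begin{document}

    \title{Paper Title (placeholder)}

    % Author and affiliation commands stay in the source; the "anonymous"
    % option suppresses this information in the PDF compiled for review.
    \author{Author Name}
    \affiliation{%
      \institution{Institution Name}
      \country{Country}}

    \begin{abstract}
    Abstract text goes here.
    \end{abstract}

    \maketitle

    \section{Introduction}
    Body text goes here.

    \end{document}

With the anonymous option enabled, the source remains complete while the review PDF hides the author block; the camera-ready version is then produced by dropping the anonymous and review options.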
At least one author of each paper must be registered for the conference in order for the paper to be published in the proceedings. The conference proceedings will be published by the ACM and made available online via the IEEE Xplore Digital Library and ACM Digital Library.
Submission requires the willingness of at least one of the authors to register and present the paper.
DML-ICC Workshop Honorary Chairs
Ian Foster, University of Chicago and Argonne National Laboratory, USA
Filip De Turck, Ghent University, Belgium
DML-ICC 2023 Co-Chairs
Marco Aldinucci, University of Torino, Italy
Luiz F. Bittencourt, University of Campinas, Brazil
Valeria Cardellini, University of Rome Tor Vergata, Italy
Program Committee
Atakan Aral, University of Vienna, Austria
José Javier Berrocal-Olmeda, University of Extremadura, Spain
Robert Birke, University of Torino, Italy
Rodrigo Calheiros, Western Sydney University, Australia
Bruno Casella, University of Torino, Italy
Marilia Curado, University of Coimbra, Portugal
Ivana Dusparic, Trinity College Dublin, Ireland
Stefano Iannucci, University of Rome III, Italy
Carlos Kamienski, Federal University of ABC, Brazil
Wei Li, University of Sydney, Australia
Zoltán Mann, University of Amsterdam, Netherlands
Gianluca Mittone, University of Torino, Italy
Radu Prodan, University of Klagenfurt, Austria
Christian Esteve Rothenberg, University of Campinas, Brazil
Josef Spillner, Zurich University of Applied Sciences, Switzerland
Javid Taheri, Karlstad University, Sweden
Karima Velasquez, University of Coimbra, Portugal