CEC: Customized Edge Computing

Research advancing the realm of Customized Edge Computing (CEC) envisions bringing computing capacity to the edge, which adds the need for decentralized data management for data acquisition, processing, routing, storage, and streaming [Zhao et al., 2019]. To accomplish that, virtualization technologies and tools should be brought to the edge in a way that enables customized support for applications. Moreover, applications developed for the edge, i.e. edge native applications, should be able to have their components split to run at different levels of the computing infrastructure: from the device itself to the cloud, passing through the edge [Bittencourt et al., 2017]. Interoperable management and support for lightweight virtualization and cloud-based virtualization are needed at the level of programming languages and application programming interfaces. Intelligent edge management will rely on such features and tools to customize edge computing resources and dynamically scatter data and processes in an efficient way. The distributed compute/network infrastructure currently being deployed for 5G will, over time, evolve into B5G and 6G [Lovén et al., 2019]. On that path, the distributed infrastructure should be leveraged to support not only data and the distributed deployment of telco functions, but also the full distributed lifecycle, deployment, and operation of third-party as well as end-user applications. Performance and behavior in both networking and computing should adapt to dynamic application load scenarios and requirements. In SMARTNESS, we aim to design intelligent, customized edge computing solutions that can autonomously adapt resource management to comply, in real time, with the changing behavior of devices producing and consuming data at the edge. Realizing Customized Edge Clouds will involve investigating the following topics:

  • Lightweight Cloud and Edge Native Architectures: Defining how different computing layers can collaborate to support edge native applications running seamlessly throughout the computing infrastructure, which includes using computing and telecommunication devices for in-situ and in-transit data processing with localized services, stream processing, and AI applications [Lovén et al., 2019]. Investigate how network and computing virtualization [Morabito et al., 2015] can be composed to integrate an interoperable continuum of computing and communication, supporting just-in-time customization of the infrastructure to changing application load and requirement scenarios.
  • Serverless computing: Implement serverless computing to adaptively deploy network functions, adapting the edge to changing application dynamics. Investigate hardware acceleration applied to serverless computing for learning workloads at the edge. Leverage novel computing paradigms such as Deviceless Edge Computing, which extends the serverless paradigm to the network edge [Glikson et al., 2017].
  • Network Service Mesh (NSM): Refers to novel approaches to realize complicated L2/L3 use cases in virtualized environments (triggered by lightweight virtualization solutions like Kubernetes) that are challenging to address with existing virtual networking models. In the case of Kubernetes as the container-based virtualization technology, NSM adds properties such as: (i) heterogeneous network configurations, (ii) custom protocols, (iii) tunneling as a first-class citizen, (iv) networking context as a first-class citizen, (v) policy-driven service function chaining (SFC), and (vi) on-demand, dynamic, negotiated connections.
  • Decentralized management and Learning: Given the dynamic and heterogeneous behavior of applications, orchestration of resources and applications should run in a distributed fashion with optimal local decision-making. An adaptive and autonomous configuration of management entities is needed to orchestrate virtualized services at the edge and in the cloud [Zhao et al., 2019]. Investigate how distributed learning mechanisms, such as federated learning, can support edge intelligence and customization, generating knowledge in collaboration with centralized, heavier learning algorithms.
  • Edge Hardware acceleration: Investigate how hardware acceleration, e.g. with special-purpose processors, can improve processing for network and other workloads at edge infrastructures [Fahmy et al., 2015].
  • Non-telco workloads: Co-existence of telco and non-telco workloads will require exposing network capabilities to non-telco workloads. Managing this co-existence will require advanced adaptation and isolation mechanisms, including control structures that allow the infrastructure owner to ensure that resources are used optimally and cost-efficiently. This also includes the possibility of monetizing these assets.
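
To make the component-splitting idea in the Lightweight Cloud and Edge Native Architectures topic concrete, the toy sketch below places hypothetical application components on the device-edge-cloud continuum by latency budget. All tier names, RTT figures, thresholds, and component names are illustrative assumptions, not measured values or part of any proposed system.

```python
# Toy placement of edge-native application components across the
# device-edge-cloud continuum by latency budget. All tier names, RTT
# figures, and component names are illustrative assumptions.
TIERS = [("cloud", 100.0), ("edge", 20.0), ("device", 1.0)]  # typical RTT in ms

def place(latency_budget_ms: float) -> str:
    """Pick the furthest (cheapest) tier whose RTT fits the budget."""
    for tier, rtt in TIERS:
        if rtt <= latency_budget_ms:
            return tier
    return "device"  # tightest budgets must run on the device itself

# Hypothetical components of one edge-native application, with the
# maximum end-to-end latency (ms) each can tolerate.
app = {"sensor-filter": 3, "stream-analytics": 25, "model-training": 500}
placement = {name: place(budget) for name, budget in app.items()}
# sensor-filter runs on the device, stream-analytics at the edge,
# model-training in the cloud
```

A real orchestrator would of course weigh capacity, cost, and data locality alongside latency; the sketch only illustrates the layered continuum.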
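
The serverless computing topic above can be illustrated with a minimal function-registry sketch: handlers are registered once and invoked on demand, the pattern a platform would use to deploy network functions reactively. The names (`EdgeRuntime`, `register`, `invoke`, the replica policy) are hypothetical and do not come from any real serverless or deviceless framework.

```python
# Minimal sketch of a serverless-style runtime for the edge; all names
# here are assumptions for illustration only.
from typing import Any, Callable, Dict

class EdgeRuntime:
    """Registers functions and invokes them on demand."""

    def __init__(self) -> None:
        self._handlers: Dict[str, Callable[[dict], Any]] = {}

    def register(self, name: str) -> Callable:
        def decorator(fn: Callable[[dict], Any]) -> Callable[[dict], Any]:
            self._handlers[name] = fn
            return fn
        return decorator

    def invoke(self, name: str, event: dict) -> Any:
        # A real platform would cold-start an isolated sandbox
        # (container/microVM) close to the data source here.
        return self._handlers[name](event)

runtime = EdgeRuntime()

@runtime.register("scale-nf")
def scale_network_function(event: dict) -> dict:
    # Toy policy: one replica per 100 units of observed load.
    return {"replicas": max(1, event["load"] // 100)}

result = runtime.invoke("scale-nf", {"load": 250})  # {'replicas': 2}
```

The design point the sketch captures is that the caller supplies only an event and a function name; where and when the function materializes is the platform's decision, which is what makes the paradigm attractive for adapting the edge.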
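
Property (v) of the NSM topic, policy-driven service function chaining, can be sketched as a packet traversing an ordered chain of virtual network functions selected per traffic class. The function and policy names below are hypothetical and do not reflect the actual Network Service Mesh API.

```python
# Illustrative policy-driven service function chaining (SFC) sketch;
# names are assumptions, not the Network Service Mesh API.
from typing import Callable, Dict, List

Packet = dict
VNF = Callable[[Packet], Packet]

def firewall(pkt: Packet) -> Packet:
    pkt.setdefault("tags", []).append("fw-ok")  # mark packet as inspected
    return pkt

def nat(pkt: Packet) -> Packet:
    pkt["src"] = "10.0.0.1"  # rewrite the source address
    pkt.setdefault("tags", []).append("nat")
    return pkt

def apply_chain(pkt: Packet, chain: List[VNF]) -> Packet:
    # Pass the packet through each function, in policy order.
    for vnf in chain:
        pkt = vnf(pkt)
    return pkt

# A policy selects the chain per traffic class; NSM envisions such
# connections being negotiated dynamically, on demand.
policy: Dict[str, List[VNF]] = {"web": [firewall, nat], "iot": [firewall]}
out = apply_chain({"src": "192.168.1.5", "dst": "app-svc"}, policy["web"])
# out carries tags ["fw-ok", "nat"] and the rewritten source address
```

In a real deployment the chain elements would be separate workloads stitched together at L2/L3, but the policy-to-chain mapping is the same idea.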
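
The federated learning mechanism mentioned under Decentralized management and Learning can be sketched in a few lines: edge nodes take local gradient steps on their private data, and a coordinator averages the resulting parameters rather than collecting the data. Plain lists stand in for model weights here; a real system would use proper models, sample-weighted averaging, and secure aggregation.

```python
# Minimal federated-averaging (FedAvg-style) sketch; a conceptual toy,
# not a production federated learning implementation.
from typing import List

def local_update(weights: List[float], grad: List[float], lr: float = 0.1) -> List[float]:
    # One local gradient-descent step on a node's private data.
    return [w - lr * g for w, g in zip(weights, grad)]

def federated_average(updates: List[List[float]]) -> List[float]:
    # The coordinator aggregates parameters, never raw data.
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

global_weights = [0.5, -0.2]
node_gradients = [[0.1, 0.4], [0.3, -0.2]]  # one local gradient per edge node
local_models = [local_update(global_weights, g) for g in node_gradients]
new_global = federated_average(local_models)
# new_global is approximately [0.48, -0.21]
```

This division of labor mirrors the collaboration the topic describes: lightweight learning stays at the edge, while the averaged model can be refined further by centralized, heavier algorithms.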