IEEE International Conference on Computer Communications
10-13 May 2021 // Virtual Conference

The First International IEEE INFOCOM Workshop on Distributed Machine Learning and Fog Networks

Call for Papers

Fog networking is emerging as an end-to-end architecture that aims to distribute computing, storage, control, and networking functions along the cloud-to-things continuum of nodes between datacenters and end users. Fueled by the volumes of data generated by network devices, machine learning has attracted significant attention in fog computing systems, both for providing intelligent applications to end users and for optimizing the operation of wireless and wireline networks. Existing methodologies for distributing machine learning across a set of devices have typically been designed for scenarios where device communication and computation capabilities are homogeneous, and/or where devices are directly connected to an aggregation server. However, these assumptions often do not hold in contemporary fog network systems. This motivates a new paradigm of fog learning, which distributes model training over networks in a network-aware manner, i.e., by optimizing training with respect to the structure of the topology among devices, the heterogeneity of node communication and computation capabilities, and the proximity of resource-limited nodes to resource-abundant ones. It also motivates the development of novel machine learning techniques for optimizing the operation of fog network systems, which must account for the short-timescale variability in network state caused by device mobility.
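
To make this network-aware training paradigm concrete, the sketch below gives a minimal two-tier (device-edge-cloud) federated averaging loop over a toy fog topology. It is purely illustrative: the topology, the linear-regression task, and every name and hyperparameter are assumptions made for this example, not something prescribed by the workshop.

    # Illustrative sketch only: hierarchical ("device -> edge -> cloud") federated
    # averaging on a toy linear-regression task. All values here are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    DIM = 5
    w_true = rng.normal(size=DIM)                       # ground-truth model

    def make_device(n_samples, local_steps):
        """A fog device with its own data and compute budget (local SGD steps)."""
        X = rng.normal(size=(n_samples, DIM))
        y = X @ w_true + 0.1 * rng.normal(size=n_samples)
        return {"X": X, "y": y, "steps": local_steps}

    def local_train(device, w_global, lr=0.05):
        """Heterogeneous local SGD starting from the current global model."""
        w = w_global.copy()
        X, y = device["X"], device["y"]
        for _ in range(device["steps"]):
            grad = 2 * X.T @ (X @ w - y) / len(y)       # least-squares gradient
            w -= lr * grad
        return w

    # Fog topology: each edge node aggregates a cluster of nearby devices, and
    # devices differ in data volume and compute capability (heterogeneity).
    edges = [
        [make_device(50, 1), make_device(200, 5)],
        [make_device(30, 2), make_device(120, 8), make_device(80, 3)],
    ]

    w_cloud = np.zeros(DIM)
    for _ in range(20):                                 # global training rounds
        edge_models, edge_weights = [], []
        for cluster in edges:
            # Edge-level aggregation: weight each device by its sample count.
            local_models = [local_train(d, w_cloud) for d in cluster]
            sizes = np.array([len(d["y"]) for d in cluster], dtype=float)
            edge_models.append(np.average(local_models, axis=0, weights=sizes))
            edge_weights.append(sizes.sum())
        # Cloud-level aggregation of the edge models (hierarchical FedAvg).
        w_cloud = np.average(edge_models, axis=0, weights=edge_weights)

    print("distance to ground truth:", np.linalg.norm(w_cloud - w_true))

The differing sample counts and local-step budgets stand in for the data and compute heterogeneity discussed above, and the sample-count weighting at both tiers mirrors standard federated averaging.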

The First International IEEE INFOCOM Workshop on Distributed Machine Learning and Fog Networks (FOGML) aims to bring together researchers, developers, and practitioners from academia and industry to innovate at the intersection of distributed machine learning and fog computing. This includes research efforts in developing machine learning methodologies both “for” and “over” networks along the cloud-to-things continuum. Specifically, we solicit research papers in areas including, but not limited to:

  • Congestion control, communication, and traffic optimization for distributed machine learning 
  • Efficient neural network training and design conducted on fog networks
  • Task scheduling and resource allocation for distributed machine learning
  • Incentive mechanisms for distributed learning
  • Impact of system characteristics, e.g., network topology, on training of machine learning models
  • Hierarchical and multi-layer federated learning for fog networks
  • Communication-efficient distributed learning, e.g., with quantization or sparsification
  • Reinforcement learning for signal design, beamforming, channel state estimation, and interference mitigation in wireless networks
  • Distributed machine learning for optimizing massive MIMO, mmWave, intelligent reflecting surfaces, and other contemporary communication technologies
  • Collaborative/cooperative model learning over device-to-device communication structures
  • Efficient coding design for distributed machine learning
  • Privacy and security considerations in distributed learning over fog network systems
  • Intelligent device sampling methods for federated learning over heterogeneous networks
  • Testbeds and experimental results on distributed learning and fog networks
  • Green distributed learning algorithm design
  • Synergies between deep learning and network slicing in fog computing systems
