March 17, 2021

Abstract S5

S5: Randomization in Deep Learning

Claudio Gallicchio

University of Pisa, Italy

Massimo Panella

Sapienza University of Rome, Italy

Ponnuthurai Nagaratnam Suganthan

Nanyang Technological University, Singapore

Abstract

Deep Learning (DL) has achieved tremendous success in applications over the last decade. Amid this rapid development, the line of research involving randomization in the design of DL algorithms is attracting increasing attention in the community. A popular entry point in this regard consists of deep neural architectures in which the internal hidden layers are based on randomized weights and training involves only a limited number of parameters, typically those in the readout function. This approach has the unquestionable advantages of computational efficiency and amenability to edge implementations on low-power devices. Notably, randomization can enter the design and analysis of DL algorithms in several other ways, e.g., in training algorithms that try to overcome the limitations of gradient back-propagation, in the mathematical analysis of deep neural architectures with Gaussian weights, in approaches related to deep neural architecture search, and in physical implementations of neural networks in neuromorphic hardware. While the current developments in the field are starting to impact the literature in a substantial way, techniques based on randomization in DL still need to make the leap towards production and popularization that will allow their advantages to be appreciated in a concrete and direct way by the next generation of researchers and practitioners.
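To make the randomized-weights idea mentioned above concrete, the following is a minimal sketch in the spirit of RVFL/ELM-style models: a hidden layer with fixed random weights produces nonlinear features, and only the linear readout is trained (here via closed-form ridge regression). This is an illustrative example assuming NumPy; the data, variable names, and hyperparameters are hypothetical and not taken from any specific method in this session.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: x in [-1, 1], target y = sin(3x) + noise.
X = rng.uniform(-1.0, 1.0, size=(200, 1))
y = np.sin(3.0 * X[:, 0]) + 0.05 * rng.standard_normal(200)

# Hidden layer with fixed random weights (never trained).
n_hidden = 300
W = rng.standard_normal((1, n_hidden))
b = rng.standard_normal(n_hidden)
H = np.tanh(X @ W + b)  # random nonlinear features

# Only the linear readout is trained, via ridge regression in closed form.
lam = 1e-3
beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ y)

# Predictions use the fixed random features and the trained readout.
y_hat = H @ beta
print("train MSE:", np.mean((y - y_hat) ** 2))
```

Because only the readout weights are optimized, training reduces to a single linear solve, which is the source of the computational efficiency highlighted in the abstract.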

This session intends to offer the ideal context for the dissemination and cross-pollination of ideas in the area of randomization in DL. Accordingly, this session calls for contributions exploiting the synergies between randomization and Deep Learning algorithms from all perspectives, including both application-oriented studies and theoretical advancements. A list of relevant topics for this special session includes, without being limited to, the following:

  • Deep Random-Weights Neural Networks, e.g., Random Vector Functional Link (RVFL) networks, Reservoir Computing (RC), Echo State Networks (ESNs), Liquid State Machines (LSMs)
  • Synergies between randomized Neural Networks, Kernel Methods and Deep Gaussian Processes
  • Randomization in Convolutional Neural Networks, Transformers and Attention Mechanisms
  • Novel applications of deep Randomized Neural Networks
  • Continual and Federated Learning in randomization-based DL models
  • The lottery ticket hypothesis and Neural Architecture Search
  • Alternatives to back-propagation algorithms (e.g., Direct Feedback Alignment)
  • Efficient implementations in popular Machine Learning libraries, including JAX, PyTorch, TensorFlow, Keras
  • Hardware implementations of randomized DL concepts