Investigating robustness of biological vs. backprop based learning
Published in ICASSP, 2022
Robustness of learning algorithms remains an important problem from the perspective of both adversarial attacks and generalization. In this work, we investigate in depth the robustness of a biologically inspired Hebbian learning algorithm. We find that Hebbian learning based algorithms outperform conventional backprop-trained CNNs by a large margin of up to 18% on the CIFAR-10 dataset under the addition of noise. We highlight that an important reason for this is the underlying representations learnt by the learning algorithms. Specifically, we find that the Hebbian method learns the most robust representations among the methods compared, which helps it generalize better. We also conduct ablations on the Hebbian network and show that the robustness of the model drops by up to 16% on the CIFAR-10 dataset if the representational capacity of the network is degraded. Hence, we find that the learnt representations play an important role in the resulting robustness of the models. We conduct experiments on multiple datasets and show that the results hold on all of them and at various noise levels.
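To make the setup concrete, below is a minimal sketch of the kind of experiment the abstract describes: train a single Hebbian layer and probe how much its representations drift when Gaussian noise is added to the inputs. The abstract does not specify the exact Hebbian rule or architecture, so this sketch assumes Oja's rule as a representative Hebbian update; the layer sizes, learning rate, and noise level `sigma` are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def oja_update(W, x, lr=1e-3):
    """One Hebbian step with Oja's stabilising decay term.

    W : (n_units, n_inputs) weight matrix
    x : (n_inputs,) input vector
    dW_ij = lr * y_i * (x_j - y_i * W_ij)   (Oja's rule)
    """
    y = W @ x  # post-synaptic activations
    W += lr * (np.outer(y, x) - (y ** 2)[:, None] * W)
    return W

# Toy data standing in for flattened image patches (assumed sizes).
n_inputs, n_units, n_samples = 64, 16, 2000
X = rng.standard_normal((n_samples, n_inputs))
W = rng.standard_normal((n_units, n_inputs)) * 0.01

for x in X:
    W = oja_update(W, x)

# Robustness probe: compare representations of clean vs. noisy inputs.
sigma = 0.5  # assumed noise standard deviation
X_noisy = X + sigma * rng.standard_normal(X.shape)

H_clean, H_noisy = X @ W.T, X_noisy @ W.T
drift = np.linalg.norm(H_clean - H_noisy) / np.linalg.norm(H_clean)
print(f"relative representation drift under noise: {drift:.3f}")
```

A smaller drift under the same input noise indicates more robust representations; the paper's claim is that Hebbian-trained features score better on this kind of probe than backprop-trained ones, which in turn tracks the accuracy gap reported above.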
Recommended citation: Y. Zhou, M. Wang, M. Gupta, et al., "Investigating robustness of biological vs. backprop based learning," in ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2022, pp. 3533-3537.