
dc.contributor.author: Galanti, Tomer
dc.contributor.author: Siegel, Zachary
dc.contributor.author: Gupte, Aparna
dc.contributor.author: Poggio, Tomaso
dc.date.accessioned: 2023-02-27T18:09:34Z
dc.date.available: 2023-02-27T18:09:34Z
dc.date.issued: 2023-02-14
dc.identifier.uri: https://hdl.handle.net/1721.1/148231
dc.description.abstract: In this paper, we study the bias of Stochastic Gradient Descent (SGD) to learn low-rank weight matrices when training deep ReLU neural networks. Our results show that training neural networks with mini-batch SGD and weight decay causes a bias towards rank minimization over the weight matrices. Specifically, we show, both theoretically and empirically, that this bias is more pronounced when using smaller batch sizes, higher learning rates, or increased weight decay. Additionally, we predict and observe empirically that weight decay is necessary to achieve this bias. Finally, we empirically investigate the connection between this bias and generalization, finding that it has a marginal effect on generalization. Our analysis is based on a minimal set of assumptions and applies to neural networks of any width or depth, including those with residual connections and convolutional layers.
dc.description.sponsorship: This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216.
dc.publisher: Center for Brains, Minds and Machines (CBMM)
dc.relation.ispartofseries: CBMM Memo;140
dc.title: SGD and Weight Decay Provably Induce a Low-Rank Bias in Deep Neural Networks
dc.type: Article
dc.type: Technical Report
dc.type: Working Paper
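
The abstract above predicts a low-rank bias arising from mini-batch SGD with weight decay. The following is a minimal, illustrative sketch (not the authors' code; the architecture, hyperparameters, and rank threshold are arbitrary assumptions) of how one might probe this claim: train a small ReLU network with small-batch SGD and weight decay, then inspect the singular-value spectrum of a hidden weight matrix.

import torch
import torch.nn as nn

# Illustrative only: synthetic data, placeholder hyperparameters.
torch.manual_seed(0)
X = torch.randn(512, 32)                # synthetic inputs
y = torch.randint(0, 2, (512,))         # synthetic binary labels

model = nn.Sequential(nn.Linear(32, 128), nn.ReLU(),
                      nn.Linear(128, 128), nn.ReLU(),
                      nn.Linear(128, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=5e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(200):
    perm = torch.randperm(512)
    for i in range(0, 512, 16):         # small batch size
        idx = perm[i:i + 16]
        opt.zero_grad()
        loss_fn(model(X[idx]), y[idx]).backward()
        opt.step()

# Effective rank of the middle hidden layer's weight matrix,
# estimated from its singular-value decay.
W = model[2].weight.detach()
s = torch.linalg.svdvals(W)             # singular values, descending
eff_rank = (s > 1e-3 * s[0]).sum().item()
print(f"effective rank of hidden layer: {eff_rank} / {min(W.shape)}")

Per the abstract's claims, rerunning this sketch with a larger batch size, a lower learning rate, or weight_decay=0 would be expected to yield a higher effective rank.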

