dc.contributor.author | Poggio, Tomaso | |
dc.contributor.author | Kur, Gil | |
dc.contributor.author | Banburski, Andrzej | |
dc.date.accessioned | 2019-12-04T21:22:22Z | |
dc.date.available | 2019-12-04T21:22:22Z | |
dc.date.issued | 2019-12-04 | |
dc.identifier.uri | https://hdl.handle.net/1721.1/123108 | |
dc.description.abstract | In solving a system of n linear equations in d variables, Ax=b, the condition number of the (n,d) matrix A measures how much errors in the data b affect the solution x. Bounds of this type are important in many inverse problems. An example is machine learning, where the key task is to estimate an underlying function from a set of measurements at random points in a high-dimensional space, and where low sensitivity to error in the data is a requirement for good predictive performance. Here we report the simple observation that when the columns of A are random vectors, the condition number of A is highest, that is, worst, when d=n, which is when the inverse of A exists. An overdetermined system (n>d) and especially an underdetermined system (n<d), for which the pseudoinverse must be used instead of the inverse, typically have significantly better, that is, lower, condition numbers. Thus the condition number of A, plotted as a function of d, shows a double descent behavior with a peak at d=n. | en_US |
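A minimal sketch, not taken from the memo itself, of the observation described in the abstract: assuming a random Gaussian (n,d) matrix A and NumPy's 2-norm condition number, the condition number peaks near d=n and drops in the over- and under-determined regimes.

import numpy as np

rng = np.random.default_rng(0)
n = 100                        # number of equations (rows), an illustrative choice
dims = range(10, 201, 10)      # number of variables d (columns), spanning d = n

for d in dims:
    # average over a few random draws to smooth the curve
    conds = [np.linalg.cond(rng.standard_normal((n, d))) for _ in range(20)]
    print(f"d = {d:3d}  mean condition number = {np.mean(conds):.1f}")

Printing the mean condition number against d traces the double descent shape reported in the abstract, with the largest values at d = n = 100.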
dc.description.sponsorship | This material is based upon work supported by the Center for Brains, Minds and Machines (CBMM), funded by NSF STC award CCF-1231216. | en_US |
dc.language.iso | en_US | en_US |
dc.publisher | Center for Brains, Minds and Machines (CBMM) | en_US |
dc.relation.ispartofseries | CBMM Memo;102 | |
dc.title | Double descent in the condition number | en_US |
dc.type | Technical Report | en_US |
dc.type | Working Paper | en_US |
dc.type | Other | en_US |