Show simple item record

dc.contributor.author	Bangachev, Kiril
dc.contributor.author	Bresler, Guy
dc.contributor.author	Tiegel, Stefan
dc.contributor.author	Vaikuntanathan, Vinod
dc.date.accessioned	2026-01-26T15:45:44Z
dc.date.available	2026-01-26T15:45:44Z
dc.date.issued	2025-06-15
dc.identifier.isbn	979-8-4007-1510-5
dc.identifier.uri	https://hdl.handle.net/1721.1/164631
dc.description	STOC ’25, Prague, Czechia	en_US
dc.description.abstract	We present a polynomial-time reduction from solving noisy linear equations over ℤ/qℤ in dimension Θ(k log n / poly(log k, log q, log log n)) with a uniformly random coefficient matrix to noisy linear equations over ℤ/qℤ in dimension n where each row of the coefficient matrix has uniformly random support of size k. This allows us to deduce the hardness of sparse problems from their dense counterparts. In particular, we derive hardness results in the following canonical settings: • Assuming the ℓ-dimensional (dense) learning with errors (LWE) problem over a polynomial-size field takes time 2^Ω(ℓ), k-sparse LWE in dimension n takes time n^Ω(k/(log k · (log k + log log n))). • Assuming the ℓ-dimensional (dense) learning parity with noise (LPN) problem over ℤ/2ℤ takes time 2^Ω(ℓ/log ℓ), k-sparse LPN in dimension n takes time n^Ω(k/(log k · (log k + log log n)^2)). These running-time lower bounds are nearly tight, as both sparse problems can be solved in time n^O(k) given sufficiently many samples. Our reduction allows us to derive several consequences in cryptography and the computational complexity of statistical problems. In addition, as a new application, we give a reduction from k-sparse LWE to noisy tensor completion. Concretely, composing the two reductions implies that order-k, rank-2^(k−1) noisy tensor completion in (ℝ^n)^⊗k takes time n^Ω(k/(log k · (log k + log log n))), assuming the exponential hardness of standard worst-case lattice problems.	en_US
dc.publisher	ACM|Proceedings of the 57th Annual ACM Symposium on Theory of Computing	en_US
dc.relation.isversionof	https://doi.org/10.1145/3717823.3718284	en_US
dc.rights	Creative Commons Attribution	en_US
dc.rights.uri	https://creativecommons.org/licenses/by/4.0/	en_US
dc.source	Association for Computing Machinery	en_US
dc.title	Near-Optimal Time-Sparsity Trade-Offs for Solving Noisy Linear Equations	en_US
dc.type	Article	en_US
dc.identifier.citation	Kiril Bangachev, Guy Bresler, Stefan Tiegel, and Vinod Vaikuntanathan. 2025. Near-Optimal Time-Sparsity Trade-Offs for Solving Noisy Linear Equations. In Proceedings of the 57th Annual ACM Symposium on Theory of Computing (STOC '25). Association for Computing Machinery, New York, NY, USA, 1910–1920.	en_US
dc.contributor.department	Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science	en_US
dc.identifier.mitlicense	PUBLISHER_POLICY
dc.eprint.version	Final published version	en_US
dc.type.uri	http://purl.org/eprint/type/ConferencePaper	en_US
eprint.status	http://purl.org/eprint/status/NonPeerReviewed	en_US
dc.date.updated	2025-08-01T08:46:01Z
dc.language.rfc3066	en
dc.rights.holder	The author(s)
dspace.date.submission	2025-08-01T08:46:01Z
mit.license	PUBLISHER_CC
mit.metadata.status	Authority Work and Publication Information Needed	en_US
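
To make the problem in the abstract concrete: in k-sparse LPN over ℤ/2ℤ, each sample is a row with uniformly random support of size k together with a noisy inner product with a hidden secret. The sketch below (illustrative only; the function name and parameters are not from the paper) generates such samples:

```python
import random

def sparse_lpn_sample(n, k, secret, noise_rate, rng):
    """One k-sparse LPN sample over Z/2Z: a uniformly random size-k
    support, and the parity of the secret on that support, flipped
    with probability noise_rate."""
    support = rng.sample(range(n), k)        # k distinct coordinates
    label = sum(secret[i] for i in support) % 2
    if rng.random() < noise_rate:            # Bernoulli noise
        label ^= 1
    return support, label

rng = random.Random(0)
n, k = 20, 3
secret = [rng.randrange(2) for _ in range(n)]
samples = [sparse_lpn_sample(n, k, secret, 0.1, rng) for _ in range(5)]
```

The brute-force algorithm mentioned in the abstract enumerates candidate secrets restricted to the observed supports, which is what gives the n^O(k) upper bound the lower bounds are compared against.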


Files in this item


This item appears in the following Collection(s)
