| dc.contributor.author | Bangachev, Kiril | |
| dc.contributor.author | Bresler, Guy | |
| dc.contributor.author | Tiegel, Stefan | |
| dc.contributor.author | Vaikuntanathan, Vinod | |
| dc.date.accessioned | 2026-01-26T15:45:44Z | |
| dc.date.available | 2026-01-26T15:45:44Z | |
| dc.date.issued | 2025-06-15 | |
| dc.identifier.isbn | 979-8-4007-1510-5 | |
| dc.identifier.uri | https://hdl.handle.net/1721.1/164631 | |
| dc.description | STOC ’25, Prague, Czechia | en_US |
| dc.description.abstract | We present a polynomial-time reduction from solving noisy linear equations over ℤ/qℤ in dimension Θ(k log n / poly(log k, log q, log log n)) with a uniformly random coefficient matrix to noisy linear equations over ℤ/qℤ in dimension n where each row of the coefficient matrix has uniformly random support of size k. This allows us to deduce the hardness of sparse problems from their dense counterparts. In particular, we derive hardness results in the following canonical settings:
• Assuming the ℓ-dimensional (dense) learning with errors (LWE) problem over a polynomial-size field takes time 2^Ω(ℓ), k-sparse LWE in dimension n takes time n^Ω(k/(log k · (log k + log log n))).
• Assuming the ℓ-dimensional (dense) learning parity with noise (LPN) problem over ℤ/2ℤ takes time 2^Ω(ℓ/log ℓ), k-sparse LPN in dimension n takes time n^Ω(k/(log k · (log k + log log n)²)).
These running time lower bounds are nearly tight, as both sparse problems can be solved in time n^O(k) given sufficiently many samples.
Our reduction allows us to derive several consequences in cryptography and the computational complexity of statistical problems. In addition, as a new application, we give a reduction from k-sparse LWE to noisy tensor completion. Concretely, composing the two reductions implies that order-k rank-2^(k−1) noisy tensor completion in (ℝ^n)^⊗k takes time n^Ω(k/(log k · (log k + log log n))), assuming the exponential hardness of standard worst-case lattice problems. | en_US |
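As an illustration of the sparse problem the abstract defines, the following minimal Python sketch samples one k-sparse LPN equation over ℤ/2ℤ: a coefficient row with uniformly random support of size k, labeled by a noisy inner product with a hidden secret. The sketch is not from the paper; the parameter values and noise rate are illustrative assumptions.

import numpy as np

# Hypothetical parameters (assumptions, not from the paper):
n, k, eta = 100, 5, 0.1   # dimension, row sparsity, Bernoulli noise rate

rng = np.random.default_rng(0)
secret = rng.integers(0, 2, size=n)   # hidden secret s in (ℤ/2ℤ)^n

def sparse_lpn_sample():
    # Coefficient row with uniformly random support of size k over ℤ/2ℤ.
    row = np.zeros(n, dtype=int)
    support = rng.choice(n, size=k, replace=False)
    row[support] = 1
    # Label is <row, s> plus Bernoulli(eta) noise, reduced mod 2.
    noise = int(rng.random() < eta)
    label = (int(row @ secret) + noise) % 2
    return row, label

a, b = sparse_lpn_sample()   # one equation: <a, s> ≈ b (mod 2)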
| dc.publisher | ACM|Proceedings of the 57th Annual ACM Symposium on Theory of Computing | en_US |
| dc.relation.isversionof | https://doi.org/10.1145/3717823.3718284 | en_US |
| dc.rights | Creative Commons Attribution | en_US |
| dc.rights.uri | https://creativecommons.org/licenses/by/4.0/ | en_US |
| dc.source | Association for Computing Machinery | en_US |
| dc.title | Near-Optimal Time-Sparsity Trade-Offs for Solving Noisy Linear Equations | en_US |
| dc.type | Article | en_US |
| dc.identifier.citation | Kiril Bangachev, Guy Bresler, Stefan Tiegel, and Vinod Vaikuntanathan. 2025. Near-Optimal Time-Sparsity Trade-Offs for Solving Noisy Linear Equations. In Proceedings of the 57th Annual ACM Symposium on Theory of Computing (STOC '25). Association for Computing Machinery, New York, NY, USA, 1910–1920. | en_US |
| dc.contributor.department | Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science | en_US |
| dc.identifier.mitlicense | PUBLISHER_POLICY | |
| dc.eprint.version | Final published version | en_US |
| dc.type.uri | http://purl.org/eprint/type/ConferencePaper | en_US |
| eprint.status | http://purl.org/eprint/status/NonPeerReviewed | en_US |
| dc.date.updated | 2025-08-01T08:46:01Z | |
| dc.language.rfc3066 | en | |
| dc.rights.holder | The author(s) | |
| dspace.date.submission | 2025-08-01T08:46:01Z | |
| mit.license | PUBLISHER_CC | |
| mit.metadata.status | Authority Work and Publication Information Needed | en_US |