Abstract
We revisit the Learning Sparse Parities with Noise (LSPN) problem on k out of n variables for k ≪ n, and present the following findings.

1. For true parity size k = n^u for any 0 < u < 1 and noise rate η < 1/2, our first algorithm solves the (n, k, η)-LSPN problem with constant probability and time/sample complexity [Formula presented].
2. For any 1/2 < c_1 < 1, k = o(ηn / log n), and η ≤ n^{-c_1}/4, our second algorithm solves the (n, k, η)-LSPN problem with constant probability and time/sample complexity n^{2(1-c_1+o(1))k}.
3. We show a "win-win" result about reducing the number of samples. Suppose there is an algorithm that solves the (n, k, η)-LSPN problem with probability Ω(1) and time/sample complexity n^{O(k)} for k = o(n^{1-c}), any noise rate η = n^{1-2c}/3, and 1/2 ≤ c < 1. Then either there exists an algorithm that solves the (n, k, μ)-LSPN problem under the lower noise rate μ = n^{-c}/3 using only 2n samples, or there exists an algorithm that solves the (n, k', μ)-LSPN problem for a much larger k' = n^{1-c} with probability n^{-O(k)}/poly(n) and time complexity poly(n) · n^{O(k)}, using only n samples.

Our algorithms are conceptually simple, combining a few basic techniques: majority voting, a reduction from the LSPN problem to its decisional variant, Goldreich-Levin list decoding, and computational sample amplification.
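To make the problem concrete, here is a minimal Python sketch of an (n, k, η)-LSPN sample oracle together with the naive n^{O(k)} exhaustive-search baseline (pick the k-subset whose parity agrees with the most noisy labels). This is an illustration of the problem setup only, not the paper's algorithms; the function names, the sample count, and the specific parameters are assumptions chosen for the example.

```python
import itertools
import random

def lspn_oracle(n, secret, eta, rng):
    """One (n, k, eta)-LSPN sample: a uniform x in {0,1}^n labeled by
    the parity of x on the secret index set, flipped with prob. eta."""
    x = [rng.randint(0, 1) for _ in range(n)]
    label = sum(x[i] for i in secret) % 2
    if rng.random() < eta:  # independent noise on each label
        label ^= 1
    return x, label

def brute_force_lspn(n, k, samples):
    """Naive n^{O(k)} baseline: score every k-subset by how many noisy
    labels its parity matches, and return the best-agreeing subset."""
    best, best_agree = None, -1
    for cand in itertools.combinations(range(n), k):
        agree = sum((sum(x[i] for i in cand) % 2) == y for x, y in samples)
        if agree > best_agree:
            best, best_agree = cand, agree
    return best

# Toy instance (illustrative parameters): n = 12, k = 2, eta = 0.1.
rng = random.Random(0)
n, k, eta = 12, 2, 0.1
secret = (3, 7)
samples = [lspn_oracle(n, secret, eta, rng) for _ in range(200)]
print(brute_force_lspn(n, k, samples))
```

With noise rate η < 1/2 the true parity agrees with roughly a (1 - η) fraction of labels while any other k-subset agrees with about half, so with enough samples the majority-fit winner is the hidden parity; the point of the paper's results is to beat this exhaustive search in the sparse regimes listed above.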
| Original language | English |
|---|---|
| Pages (from-to) | 76-86 |
| Number of pages | 11 |
| Journal | Theoretical Computer Science |
| Volume | 873 |
| DOIs | |
| State | Published - 10 Jun 2021 |
| Externally published | Yes |
Keywords
- Algorithm analysis
- Learning parity with noise
- Learning sparse parity with noise
- Learning theory
- Sample amplification