On the Hardness of Sparsely Learning Parity with Noise

  • Di Yan
  • Hanlin Liu
  • Shuoyao Zhao
  • Yu Yu*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

The Learning Parity with Noise (LPN) problem is the average-case analogue of the NP-complete problem of decoding linear codes, and it has been extensively studied in learning theory, coding theory and cryptography, with applications to quantum-resistant cryptographic schemes. However, LPN-based schemes also suffer from large public key sizes, a common drawback that hinders code-based cryptography from being practical. In this paper, we study a sparse variant of LPN whose public matrix consists of sparse vectors rather than uniformly random ones. We show, via a win-win argument, that at least one of the following statements is true: (i) the hardness of sparse LPN is implied by that of standard LPN under the same noise rate; or (ii) there exist new black-box constructions of public-key encryption schemes and oblivious transfer protocols from standard LPN. Since the second statement relies on noise regimes believed infeasible for LPN-based public-key cryptography, we argue that the first statement is more likely to hold, i.e., sparse LPN is as hard as standard LPN. Finally, we give a (heuristic) method to further compress the sparse public matrix by evaluating pseudorandom functions whose keys are made public, with security again resting on the aforementioned win-win technique.
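To make the distinction concrete, the following minimal sketch generates one standard LPN sample (uniform row vector) and one sparse LPN sample (row vector of fixed Hamming weight). The sparsity parameter `k` and noise rate `tau` are illustrative choices, not values from the paper:

```python
import random

def lpn_sample(s, tau=0.125):
    """Standard LPN sample over GF(2): uniform a, label <a, s> + Bernoulli(tau) noise."""
    n = len(s)
    a = [random.randrange(2) for _ in range(n)]          # uniformly random row
    e = 1 if random.random() < tau else 0                # Bernoulli noise bit
    b = (sum(ai & si for ai, si in zip(a, s)) + e) % 2   # noisy inner product mod 2
    return a, b

def sparse_lpn_sample(s, k=3, tau=0.125):
    """Sparse-LPN sample: a has exactly k ones (k is a hypothetical sparsity parameter)."""
    n = len(s)
    support = random.sample(range(n), k)                 # k random nonzero coordinates
    a = [0] * n
    for i in support:
        a[i] = 1
    e = 1 if random.random() < tau else 0
    b = (sum(s[i] for i in support) + e) % 2             # inner product touches only k bits
    return a, b
```

The sparse row can be stored as its `k` support indices rather than `n` bits, which is the source of the public-key compression the abstract alludes to.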

Original language: English
Pages (from-to): 1939-1947
Number of pages: 9
Journal: Computer Journal
Volume: 65
Issue number: 8
DOIs
State: Published - 1 Aug 2022
Externally published: Yes

Keywords

  • Learning Parity with Noise
  • Sparsely Learning Parity with Noise
  • oblivious transfer
  • privacy amplification
  • pseudorandom functions
  • public-key encryption
  • win-win argument
