Smoothing Out Binary Linear Codes and Worst-Case Sub-exponential Hardness for LPN

  • Yu Yu*
  • Jiang Zhang

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

8 Scopus citations

Abstract

Learning parity with noise (LPN) is a notorious (average-case) hard problem that has been well studied in learning theory, coding theory and cryptography since the early 1990s. It further inspired the learning with errors (LWE) problem [Regev, STOC 2005], which has become one of the central building blocks of post-quantum cryptography and advanced cryptographic primitives. Unlike LWE, whose hardness is reducible from worst-case lattice problems, no corresponding worst-case hardness results were known for LPN until very recently. At Eurocrypt 2019, Brakerski et al. [BLVW19] established the first feasibility result: the worst-case hardness of the nearest codeword problem (NCP) (on balanced linear codes) at the extremely low noise rate $\frac{\log^2 n}{n}$ implies the quasi-polynomial hardness of LPN at the high noise rate $1/2 - 1/\mathrm{poly}(n)$. It remained open whether a worst-case to average-case reduction can be established for standard (constant-noise) LPN, ideally with sub-exponential hardness. We start with the simple observation that the hardness of high-noise LPN over large fields is implied by that of LWE with the same modulus, and is thus reducible from the worst-case hardness of lattice problems. We then revisit [BLVW19], which is the main focus of this work. We first expand the underlying binary linear codes (of the NCP) beyond the balanced codes considered in [BLVW19] to another class of codes (those with a minimum dual distance). At the core of our reduction is a new variant of the smoothing lemma (for both classes of binary codes) that circumvents the barriers (inherent in the underlying worst-case randomness extraction) and admits trade-offs over a wider spectrum of parameter choices. In addition to a worst-case hardness result similar to that of [BLVW19], we show that for any constant $0 < c < 1$ the constant-noise LPN problem is $\bigl(T = 2^{\Omega(n^{1-c})},\ \epsilon = 2^{-\Omega(n^{\min(c,1-c)})},\ q = 2^{\Omega(n^{\min(c,1-c)})}\bigr)$-hard, assuming that the NCP at the low noise rate $\tau = n^{-c}$ is $\bigl(T = 2^{\Omega(\tau n)},\ \epsilon = 2^{-\Omega(\tau n)},\ m = 2^{\Omega(\tau n)}\bigr)$-hard in the worst case, where $T$, $\epsilon$, $q$ and $m$ denote time complexity, success rate, sample complexity and codeword length, respectively. Moreover, refuting the worst-case hardness assumption would imply arbitrary polynomial speedups over the current state-of-the-art algorithms for solving the NCP (and LPN), which is a win-win result. Unfortunately, public-key encryption and collision-resistant hash functions require constant-noise LPN with $\bigl(T = 2^{\omega(n^{1/2})},\ \epsilon = 2^{-\omega(n^{1/2})},\ q = 2^{n^{1/2}}\bigr)$-hardness (Yu et al., CRYPTO 2016 & ASIACRYPT 2019), which is almost (up to an arbitrary $\omega(1)$ factor in the exponent) what is reducible from the worst-case NCP when $c = 0.5$. We leave it as an open problem whether the gap can be closed or whether a separation holds instead.
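To fix notation: an LPN distinguisher receives $q$ samples $(a, \langle a, s\rangle + e)$ over $\mathbb{F}_2$, runs in time $T$, and must achieve advantage $\epsilon$ over guessing against the uniform distribution. The following minimal Python sketch generates such samples; it is our illustration only (the helper `lpn_samples` and all parameter choices are ours, not from the paper):

```python
import numpy as np

def lpn_samples(n: int, q: int, tau: float, rng: np.random.Generator):
    """Draw q LPN samples (a, <a,s> + e mod 2) for a fixed random secret s.

    n   -- secret length (dimension)
    q   -- number of samples (the sample complexity in the abstract)
    tau -- Bernoulli noise rate: a constant for "constant-noise" LPN,
           or tau = n**(-c) in the low-noise NCP regime of the reduction
    """
    s = rng.integers(0, 2, size=n)              # secret s in {0,1}^n
    A = rng.integers(0, 2, size=(q, n))         # uniform matrix A in {0,1}^{q x n}
    e = (rng.random(q) < tau).astype(np.int64)  # Bernoulli(tau) noise vector
    b = (A @ s + e) % 2                         # noisy labels b = As + e mod 2
    return A, b, s

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, c = 64, 0.5
    tau = n ** (-c)  # illustrative low-noise rate tau = n^{-c}
    A, b, s = lpn_samples(n, q=128, tau=tau, rng=rng)
    # A (T, eps, q)-distinguisher must tell (A, b) apart from (A, uniform).
    flips = int(((A @ s) % 2 != b).sum())
    print(f"noise rate tau = {tau:.3f}, flipped labels: {flips}/128")
```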
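To make the closing parameter comparison concrete, here is a worked instantiation (our illustration, obtained by plugging $c = 1/2$ into the bounds stated above) showing why the remaining gap is only an $\omega(1)$ factor in the exponent:

```latex
% Illustrative instantiation at c = 1/2, derived from the bounds stated above.
% Then min(c, 1-c) = 1/2, tau = n^{-1/2}, and tau * n = n^{1/2}.
\[
\text{NCP at } \tau = n^{-1/2} \text{ is }
\bigl(T = 2^{\Omega(n^{1/2})},\ \epsilon = 2^{-\Omega(n^{1/2})},\ m = 2^{\Omega(n^{1/2})}\bigr)\text{-hard}
\]
\[
\Longrightarrow\ \text{constant-noise LPN is }
\bigl(T = 2^{\Omega(n^{1/2})},\ \epsilon = 2^{-\Omega(n^{1/2})},\ q = 2^{\Omega(n^{1/2})}\bigr)\text{-hard},
\]
% whereas public-key encryption needs 2^{omega(n^{1/2})}-hardness: the same
% exponent n^{1/2}, but with Omega(.) replaced by omega(.), i.e. an omega(1) gap.
```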

Original language: English
Title of host publication: Advances in Cryptology – CRYPTO 2021 – 41st Annual International Cryptology Conference, CRYPTO 2021, Proceedings
Editors: Tal Malkin, Chris Peikert
Publisher: Springer Science and Business Media Deutschland GmbH
Pages: 473-501
Number of pages: 29
ISBN (Print): 9783030842512
State: Published - 2021
Externally published: Yes
Event: 41st Annual International Cryptology Conference, CRYPTO 2021 - Virtual, Online
Duration: 16 Aug 2021 – 20 Aug 2021

Publication series

Name: Lecture Notes in Computer Science
Volume: 12827 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 41st Annual International Cryptology Conference, CRYPTO 2021
City: Virtual, Online
Period: 16/08/21 – 20/08/21

Keywords

  • Foundations of cryptography
  • Learning parity with noise
  • Smoothing lemma
  • Worst-case to average-case reduction
