On the hardness of sparsely learning parity with noise

Hanlin Liu, Di Yan, Yu Yu, Shuoyao Zhao

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

Abstract

Learning Parity with Noise (LPN) is the average-case analogue of the NP-complete problem of decoding random linear codes, and it has been extensively studied in learning theory and cryptography, with applications to quantum-resistant cryptographic schemes. In this paper, we study a sparse variant of LPN whose public matrix consists of sparse vectors (equivalently, each entry of the matrix follows a Bernoulli distribution); the variant considered by Applebaum, Barak, and Wigderson (STOC 2010) is an (extreme) special case. We show a win-win argument: at least one of the following holds: (1) the hardness of sparse LPN is implied by that of standard LPN under the same noise rate, or (2) there exist new black-box constructions of public-key encryption (PKE) schemes and oblivious transfer (OT) protocols from standard LPN.
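As a rough illustration of the problem studied here (not taken from the paper itself), a sparse LPN instance with a Bernoulli public matrix can be sampled as follows. All parameter names and values (`n`, `m`, `rho`, `tau`) are placeholders chosen for the sketch, not the paper's actual parameter regime.

```python
import numpy as np

def sparse_lpn_sample(n=128, m=256, rho=0.1, tau=0.125, seed=0):
    """Sample a sparse-LPN instance (A, b).

    A has i.i.d. Bernoulli(rho) entries (the 'sparse' public matrix),
    s is a uniform secret in {0,1}^n, e is Bernoulli(tau) noise, and
    b = A.s + e (mod 2). Illustrative sketch only.
    """
    rng = np.random.default_rng(seed)
    A = (rng.random((m, n)) < rho).astype(np.uint8)   # sparse public matrix
    s = rng.integers(0, 2, size=n, dtype=np.uint8)    # uniform secret vector
    e = (rng.random(m) < tau).astype(np.uint8)        # Bernoulli noise vector
    b = (A @ s + e) % 2                               # noisy labels
    return A, b, s

A, b, s = sparse_lpn_sample()
```

The distinguishing/search task is then to recover `s` (or to tell `(A, b)` apart from uniform) given only `A` and `b`; setting `rho` very small recovers the extremely sparse regime mentioned in the abstract.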

Original language: English
Title of host publication: Provable Security - 11th International Conference, ProvSec 2017, Proceedings
Editors: Tatsuaki Okamoto, Yong Yu, Man Ho Au, Yannan Li
Publisher: Springer Verlag
Pages: 261-267
Number of pages: 7
ISBN (Print): 9783319686363
State: Published - 2017
Externally published: Yes
Event: 11th International Conference on Provable Security, ProvSec 2017 - Xi'an, China
Duration: 23 Oct 2017 - 25 Oct 2017

Publication series

Name: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Volume: 10592 LNCS
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349

Conference

Conference: 11th International Conference on Provable Security, ProvSec 2017
Country/Territory: China
City: Xi'an
Period: 23/10/17 - 25/10/17
