An n-gram-based approach for detecting approximately duplicate database records

Zengping Tian, Hongjun Lu, Wenyun Ji, Aoying Zhou, Zhong Tian

Research output: Contribution to journal › Article › peer-review


Abstract

Detecting and eliminating duplicate records is one of the major tasks for improving data quality. The task, however, is not as trivial as it seems, since various errors, such as character insertion, deletion, transposition, substitution, and word switching, are often present in real-world databases. This paper presents an n-gram-based approach for detecting duplicate records in large databases. Using the approach, records are first mapped to numbers based on the n-grams of their field values. The obtained numbers are then clustered, and records within a cluster are taken as potential duplicate records. Finally, record comparisons are performed within clusters to identify true duplicate records. The unique feature of this method is that it does not require preprocessing to correct syntactic or typographical errors in the source data in order to achieve high accuracy. Moreover, sorting the source data file is unnecessary. Only a fixed number of database scans is required. Therefore, compared with previous methods, the algorithm is more time-efficient.
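
The pipeline described in the abstract (n-gram fingerprinting, one-dimensional clustering of the resulting numbers, then within-cluster comparison) can be sketched as below. This is a minimal illustration under stated assumptions, not the paper's actual algorithm: the CRC-based gram hash, the per-gram modulus of 101, the gap tolerance of 400, and the edit-distance threshold of 3 are all hypothetical stand-ins for the mapping and clustering the authors define.

    import zlib

    def ngrams(s, n=2):
        # Character n-grams of a lowercased string.
        s = s.lower()
        return {s[i:i + n] for i in range(len(s) - n + 1)}

    def record_key(record, n=2):
        # Map a record (tuple of field strings) to a single number by summing
        # a small deterministic hash of each n-gram. A few differing n-grams
        # shift the key only slightly, so near-duplicates land on nearby
        # numbers. The per-gram modulus is an illustrative choice.
        return sum(zlib.crc32(g.encode()) % 101
                   for field in record for g in ngrams(field, n))

    def cluster_by_key(records, tolerance=400, n=2):
        # One-dimensional clustering: sort records by key and start a new
        # cluster whenever the gap between consecutive keys exceeds
        # `tolerance` (an assumed, tunable parameter).
        keyed = sorted((record_key(r, n), r) for r in records)
        clusters, last = [], None
        for k, r in keyed:
            if last is None or k - last > tolerance:
                clusters.append([])
            clusters[-1].append(r)
            last = k
        return clusters

    def edit_distance(a, b):
        # Standard Levenshtein distance via dynamic programming.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1, cur[j - 1] + 1,
                               prev[j - 1] + (ca != cb)))
            prev = cur
        return prev[-1]

    def find_duplicates(records, tolerance=400, max_dist=3, n=2):
        # Pairwise comparison happens only inside clusters; a pair is
        # reported as a duplicate when the concatenated field values are
        # within `max_dist` edits of each other.
        dups = []
        for cluster in cluster_by_key(records, tolerance, n):
            for i in range(len(cluster)):
                for j in range(i + 1, len(cluster)):
                    a, b = " ".join(cluster[i]), " ".join(cluster[j])
                    if edit_distance(a, b) <= max_dist:
                        dups.append((cluster[i], cluster[j]))
        return dups

    records = [
        ("John Smith", "12 Main St"),
        ("Jon Smith", "12 Main St"),   # character-deletion variant
        ("Mary Jones", "99 Hill Rd"),
    ]
    print(find_duplicates(records))   # flags the first two records

Note how the sketch mirrors the abstract's efficiency claim: no preprocessing of the source strings is needed, and pairwise edit-distance comparisons are confined to clusters rather than performed over all record pairs.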

Original language: English
Pages (from-to): 325-331
Number of pages: 7
Journal: International Journal on Digital Libraries
Volume: 3
Issue number: 4
State: Published - 2000
Externally published: Yes

Keywords

  • Data quality
  • Duplicate elimination
  • Edit distance
  • N-gram
