
K-Nearest Embeddings Blocking

Explore how k-nearest embeddings blocking enhances entity resolution by selecting candidate pairs via semantic vector similarity. Understand preprocessing, embedding text with SBERT, and using LanceDB for efficient indexing. Learn to balance recall and computational cost in large datasets, helping scale entity resolution pipelines effectively.

Indexing in entity resolution addresses the main computational challenge: the large number of potential candidate pairs. Traditional methods rely on the following (a short sketch of both appears after the list):

  • Lexical matching (also known as exact matching), such as standard blocking (SB)

  • Lexical sorting, as in sorted neighborhood (SN)
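
To make the contrast concrete, here is a minimal Python sketch of both ideas on a toy list of record dictionaries. The record fields, blocking key, and window size are illustrative choices, not prescribed by this lesson.

```python
from itertools import combinations

records = [
    {"id": 1, "name": "Jon Smith"},
    {"id": 2, "name": "John Smith"},
    {"id": 3, "name": "Jane Doe"},
    {"id": 4, "name": "J. Smith"},
]

# Standard blocking (SB): group records by an exact blocking key,
# then compare only records that fall into the same block.
def standard_blocking(records, key_fn):
    blocks = {}
    for r in records:
        blocks.setdefault(key_fn(r), []).append(r)
    pairs = []
    for block in blocks.values():
        pairs.extend(combinations(block, 2))
    return pairs

# Sorted neighborhood (SN): sort records by a key and compare each
# record only with its neighbors inside a sliding window.
def sorted_neighborhood(records, key_fn, window=3):
    ordered = sorted(records, key=key_fn)
    pairs = []
    for i in range(len(ordered)):
        for j in range(i + 1, min(i + window, len(ordered))):
            pairs.append((ordered[i], ordered[j]))
    return pairs

# Illustrative blocking key: first letter of the surname.
key = lambda r: r["name"].split()[-1][0].lower()
print(standard_blocking(records, key))
print(sorted_neighborhood(records, key, window=3))
```

Both functions prune the quadratic pair space, but they only succeed when near-duplicates agree on the chosen key, which is exactly the limitation semantic search is meant to relax.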

Both methods work within the original feature space. Let’s explore how to measure similarity in a vector space and how to complement lexical search with semantic search.
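
As a first taste of vector-space similarity, the sketch below embeds two lexically different but semantically close strings and computes their cosine similarity. It assumes the sentence-transformers package and the all-MiniLM-L6-v2 model; both are illustrative choices rather than this lesson's fixed setup.

```python
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")

# Two records that share almost no tokens, so exact matching or sorting
# on the raw strings would likely miss them, yet their meanings are close.
a = model.encode("IBM Corp., Armonk, New York")
b = model.encode("International Business Machines, NY")

# Cosine similarity: the cosine of the angle between the two embedding vectors.
cos_sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(f"cosine similarity: {cos_sim:.3f}")
```

A high cosine score for such a pair is what lets embedding-based blocking surface candidates that purely lexical methods would never place in the same block.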

Concept

Let $r_1, \ldots, r_n$ denote our dataset of size $n$ ...