Customizing the Tokenizer and Sentence Segmentation
Explore how to customize spaCy's tokenizer by adding special case rules for domain-specific terms and understand the complexity of sentence segmentation. Learn to debug tokenization processes and use spaCy's dependency parser for accurate sentence boundary detection, preparing you for effective token-level text processing.
When we work with a specific domain, such as medicine, insurance, or finance, we often come across words, abbreviations, and entities that need special attention. Most domains we'll process have characteristic words and phrases that need custom tokenization rules. Here's how to add a special case rule to an existing Tokenizer class instance:
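A minimal sketch of a special case rule, using the "lemme" example from spaCy's documentation (a blank English pipeline stands in for a full model here; in a real domain project the string and the sub-tokens would be your own domain terms):

```python
import spacy
from spacy.symbols import ORTH

# A blank English pipeline is enough to demonstrate tokenization
nlp = spacy.blank("en")

# Default tokenization keeps "lemme" as a single token
print([t.text for t in nlp("lemme that")])

# Register a special case: split "lemme" into "lem" + "me".
# The ORTH values must concatenate to exactly the original string.
nlp.tokenizer.add_special_case("lemme", [{ORTH: "lem"}, {ORTH: "me"}])
print([t.text for t in nlp("lemme that")])
```

After the rule is added, the tokenizer emits lem and me wherever the exact string "lemme" appears, without changing any other tokenization behavior.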
Here is what we did:

- We again started by importing spacy.
- Then, we imported the ORTH symbol, which means orthography; that is, the exact text of the token.
- We continued ...