German BERT

Learn about the German BERT model and how to use auto classes with it to simplify model selection and configuration.

German BERT was developed by deepset.ai, who trained a BERT model from scratch on German text. The pre-trained German BERT model is open source and free to use. It was trained on a recent German Wikipedia dump, news articles, and data from OpenLegalData on a single Cloud TPU v2 for a period of nine days.

German BERT has been evaluated on several downstream tasks, including text classification, named entity recognition (NER), and document classification, and it outperforms multilingual BERT (M-BERT) on all of them. We can use the pre-trained German BERT model directly with Hugging Face's transformers library.
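As a minimal sketch of direct use, assuming deepset's checkpoint is published on the Hugging Face Hub under the identifier bert-base-german-cased, we can load the tokenizer and model explicitly and extract contextual embeddings for a German sentence:

from transformers import BertTokenizer, BertModel

# "bert-base-german-cased" is assumed to be deepset's German BERT
# checkpoint on the Hugging Face Hub.
tokenizer = BertTokenizer.from_pretrained("bert-base-german-cased")
model = BertModel.from_pretrained("bert-base-german-cased")

# Encode a German sentence and obtain its contextual embeddings.
inputs = tokenizer("Berlin ist die Hauptstadt von Deutschland.",
                   return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)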

Using auto classes with German BERT

Let's use the auto classes from the transformers library. The auto classes automatically infer the correct model architecture from the model name we pass and instantiate the appropriate class for it. Let's explore this in more detail.

Importing modules

First, we import the AutoTokenizer and AutoModel classes:
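A minimal sketch of this step, again assuming the bert-base-german-cased identifier from above:

from transformers import AutoTokenizer, AutoModel

# The auto classes read the checkpoint's configuration and instantiate
# the matching architecture; for this checkpoint, the returned objects
# are the same BERT tokenizer and model we would construct explicitly.
tokenizer = AutoTokenizer.from_pretrained("bert-base-german-cased")
model = AutoModel.from_pretrained("bert-base-german-cased")

Because the architecture is resolved from the checkpoint's configuration rather than hard-coded, the same two lines work unchanged if we later swap in a different pre-trained model name.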
