
Japanese BERT

Explore how to use Japanese BERT models: their tokenization with MeCab and WordPiece, the differences between subword and character splits, and how to apply pre-trained models to obtain Japanese sentence representations, with practical coding examples.

The Japanese BERT model is pre-trained on Japanese Wikipedia text with whole word masking (WWM). We tokenize Japanese text with MeCab, a morphological analyzer that first splits a sentence into words (morphemes) before any subword or character splitting is applied. ...
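As a minimal sketch of this pipeline, the snippet below loads a MeCab-based Japanese BERT tokenizer and model through the Hugging Face `transformers` library and extracts a sentence representation. The checkpoint name `cl-tohoku/bert-base-japanese-whole-word-masking` is an assumption: it is one publicly available Japanese BERT pre-trained on Japanese Wikipedia with WWM that matches the description above, not necessarily the exact model the lesson uses.

```python
# A minimal sketch, assuming the Hugging Face transformers library and the
# cl-tohoku Japanese BERT checkpoint (an assumption, not the lesson's
# confirmed model). Requires: pip install transformers torch fugashi ipadic
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "cl-tohoku/bert-base-japanese-whole-word-masking"  # assumed checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

text = "日本語のテキストをトークン化します。"  # "We tokenize Japanese text."

# MeCab first splits the sentence into morphemes; WordPiece then splits
# rare morphemes into subwords prefixed with "##". (A character-split
# variant, e.g. cl-tohoku/bert-base-japanese-char, splits each morpheme
# into single characters instead.)
print(tokenizer.tokenize(text))

# Encode the sentence and take the [CLS] hidden state as a simple
# fixed-length sentence representation.
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
cls_embedding = outputs.last_hidden_state[:, 0, :]  # shape: (1, hidden_size)
print(cls_embedding.shape)
```

Printing the tokens makes the two-stage tokenization visible: common morphemes survive as whole tokens, while rarer ones appear as `##`-prefixed WordPiece fragments.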