Fine-tune a BERT model using a Colab TPU.

First, some background on the BERT model.

Colab walkthrough

# Install transformers and download the Greek-specific model and tokenizer files.
# This is done only for testing some inputs.
# For fine-tuning, the model has to be loaded again when we build it inside strategy.scope().
!pip install transformers
from transformers import AutoTokenizer, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("nlpaueb/bert-base-greek-uncased-v1")
model = TFAutoModel.from_pretrained("nlpaueb/bert-base-greek-uncased-v1")
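The snippets below assume a `tf.distribute.TPUStrategy` object named `strategy`. The article does not show how it is created; on Colab it is typically set up right after installing the dependencies. A minimal sketch, falling back to the default strategy when no TPU is attached:

```python
import tensorflow as tf

try:
    # Detect and initialize the Colab TPU, then build a strategy that places
    # model variables and computation on the TPU cores.
    resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
    tf.config.experimental_connect_to_cluster(resolver)
    tf.tpu.experimental.initialize_tpu_system(resolver)
    strategy = tf.distribute.TPUStrategy(resolver)
except ValueError:
    # No TPU found (e.g. running locally): use the default CPU/GPU strategy.
    strategy = tf.distribute.get_strategy()
```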
# bert_encode is a helper (not shown here) that turns the raw texts into padded token-id arrays
input_x = bert_encode(text_list, bert_preprocess_model)
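`bert_encode` is not defined in the article; conceptually, it tokenizes each text and pads or truncates the resulting token ids so every example has the same fixed length. A pure-Python sketch of that padding step (the function name and pad id are illustrative, not from the article):

```python
def pad_or_truncate(ids, max_len, pad_id=0):
    """Pad a list of token ids with pad_id, or cut it, to exactly max_len."""
    return (ids + [pad_id] * max_len)[:max_len]

# Two tokenized sentences of different lengths become one rectangular batch.
batch = [[101, 2023, 102], [101, 2003, 1037, 3231, 102]]
padded = [pad_or_truncate(ids, 4) for ids in batch]
# padded == [[101, 2023, 102, 0], [101, 2003, 1037, 3231]]
```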
# Creating the model inside the TPUStrategy scope places its variables on the TPU
with strategy.scope():
    model = build_model()
    model.compile(tf.keras.optimizers.Adam(learning_rate=1e-5),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'],
                  steps_per_execution=32)
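`build_model()` is referenced above but never shown. A plausible sketch, assuming the classifier puts a softmax head on top of the encoder's [CLS] token (the `num_labels` and `max_len` values are illustrative, not from the article):

```python
import tensorflow as tf
from transformers import TFAutoModel

def build_model(num_labels=3, max_len=128):
    # Reload the pretrained Greek BERT encoder; inside strategy.scope()
    # this places its weights on the TPU.
    encoder = TFAutoModel.from_pretrained("nlpaueb/bert-base-greek-uncased-v1")
    input_ids = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name="input_ids")
    attention_mask = tf.keras.Input(shape=(max_len,), dtype=tf.int32, name="attention_mask")
    outputs = encoder(input_ids, attention_mask=attention_mask)
    cls_token = outputs.last_hidden_state[:, 0, :]  # [CLS] representation
    probs = tf.keras.layers.Dense(num_labels, activation="softmax")(cls_token)
    return tf.keras.Model(inputs=[input_ids, attention_mask], outputs=probs)
```

A softmax head with `categorical_crossentropy` matches the `model.compile` call above, which expects one-hot encoded labels.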

train_history = model.fit(input_x, train_labels)

Model saving/loading on TPUs

save_locally = tf.saved_model.SaveOptions(experimental_io_device='/job:localhost')
model.save('./model', options=save_locally)  # saving in TensorFlow's "SavedModel" format

with strategy.scope():
    load_locally = tf.saved_model.LoadOptions(experimental_io_device='/job:localhost')
    model = tf.keras.models.load_model('./model', options=load_locally)  # loading in TensorFlow's "SavedModel" format


