Insufficient training data is arguably the most challenging issue in natural language processing (NLP). NLP is a remarkably diverse field spanning many distinct tasks, and in most cases the task-specific datasets contain only a few hundred or a few thousand human-labeled training examples.
Modern deep learning NLP models, by contrast, benefit from far larger amounts of data: they perform best when trained on millions, or even billions, of examples. To close this data gap, researchers have developed a variety of techniques for training general-purpose language models on the enormous quantities of unannotated text available on the web, a step known as pre-training.
Programmers can then fine-tune these pre-trained models on small-data NLP tasks such as sentiment analysis and question answering, which yields substantial accuracy improvements compared to training on those small datasets from scratch.
What is BERT?
BERT, short for Bidirectional Encoder Representations from Transformers, has been making massive waves in the machine learning landscape. Published recently by researchers at Google AI Language, BERT is causing a stir because of its state-of-the-art results on a wide variety of natural language processing tasks, including MNLI (natural language inference), SQuAD v1.1 (question answering), and several others.
The machine learning community considers BERT an essential technical innovation chiefly because it applies the bidirectional training of the Transformer, a popular attention model, to language modeling. This stands in contrast to previous efforts, which read text sequences in a single direction, either left-to-right or right-to-left.
The results indicate that a bidirectionally trained language model develops a deeper sense of language context and flow than single-direction models. In the paper, the researchers detail a novel technique called Masked LM (MLM), which makes bidirectional training possible in models where it was previously impractical.
How BERT Operates
BERT makes heavy use of the Transformer, an attention mechanism that learns contextual relationships between the words (and subwords) in a text. In its basic form, a Transformer includes two distinct mechanisms: an encoder that reads the text input and a decoder that produces a prediction for the task.
Since BERT's goal is to produce an effective language model, only the encoder mechanism is necessary. A detailed paper by Google's researchers describes how the Transformer works.
Unlike directional models, which read the text input sequentially (left-to-right or right-to-left), the Transformer encoder reads the entire sequence of words at once. It is therefore considered bidirectional, though some would argue that non-directional is a more accurate description. This characteristic allows the model to learn a word's context from all of its surroundings, both left and right.
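To make this concrete, here is a minimal pure-Python sketch of scaled dot-product self-attention, the core operation inside the Transformer encoder. It omits the learned query/key/value projections and multiple attention heads of the real architecture, and the token vectors are made-up toy values. Notice that every position attends to every other position in a single step, which is exactly what makes the encoder non-directional:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def self_attention(vectors):
    """Toy single-head self-attention: each position computes scores
    against all positions at once, so context flows in both directions."""
    d = len(vectors[0])
    out = []
    for q in vectors:
        # Attention scores of this position against *every* position.
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in vectors]
        weights = softmax(scores)
        # Weighted sum over all value vectors (here values = inputs).
        out.append([sum(w * v[i] for w, v in zip(weights, vectors))
                    for i in range(d)])
    return out

# Three toy 2-d token vectors; each output mixes info from all three.
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
contextual = self_attention(tokens)
```

Each output vector is a weighted blend of every input vector, so the representation of a word already reflects its full left and right context.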
When training language models, one of the biggest challenges is defining a prediction goal. As discussed earlier, most models predict the next word in a sequence, which has been useful for quite a while but is a directional approach that inherently limits context learning. BERT overcomes this challenge by utilizing the following training strategies:
Masked LM aka MLM
Before word sequences are fed into BERT, fifteen percent of the tokens in each sequence are replaced with a [MASK] token. The model then attempts to predict the original value of the masked words based on the context provided by the non-masked tokens. Technically speaking, predicting the output words requires:
- Adding a classification layer on top of the encoder output
- Multiplying the output vectors by the embedding matrix, transforming them into the vocabulary dimension
- Calculating the probability of each word in the vocabulary with softmax
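The three steps above can be sketched in a few lines of plain Python. The tiny vocabulary, embedding matrix, and hidden vector below are all made-up toy values, and the classification layer is reduced to a single matrix multiplication:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Toy setup: hypothetical 5-word vocabulary with 3-dim embeddings.
vocab = ["the", "cat", "sat", "mat", "[MASK]"]
embedding_matrix = [  # one 3-dim row per vocabulary word (made up)
    [0.2, 0.1, 0.0],
    [0.9, 0.3, 0.1],
    [0.1, 0.8, 0.2],
    [0.0, 0.2, 0.9],
    [0.5, 0.5, 0.5],
]

# Stand-in for the classification layer's output at the masked
# position (values are invented for illustration).
hidden = [0.8, 0.4, 0.1]

# Multiply by the embedding matrix: one logit per vocabulary word...
logits = [sum(h * e for h, e in zip(hidden, row))
          for row in embedding_matrix]
# ...then softmax turns the logits into a probability per word.
probs = softmax(logits)
predicted = vocab[probs.index(max(probs))]  # → "cat" for these values
```

The word with the highest probability is the model's guess for the masked position.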
Remember, BERT's loss function considers only the predictions for the masked values and ignores the predictions for the non-masked words. As a consequence, the model converges more slowly than ordinary directional models, a trade-off offset by its heightened context awareness.
In practice, the BERT implementation is slightly more elaborate, because it does not replace every selected token with [MASK]: of the fifteen percent of tokens chosen, eighty percent become [MASK], ten percent are swapped for a random token, and ten percent are left unchanged.
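As a rough illustration, here is a sketch of that masking recipe. The sentence and vocabulary are invented, and a real implementation works on subword ids rather than whole-word strings:

```python
import random

def mask_tokens(tokens, vocab, rng):
    """Apply BERT's masking recipe: select ~15% of tokens; of those,
    80% -> [MASK], 10% -> a random token, 10% -> left unchanged."""
    masked = list(tokens)
    labels = {}  # position -> original token the model must predict
    for i, tok in enumerate(tokens):
        if rng.random() < 0.15:              # select ~15% of positions
            labels[i] = tok
            roll = rng.random()
            if roll < 0.8:
                masked[i] = "[MASK]"         # 80%: replace with [MASK]
            elif roll < 0.9:
                masked[i] = rng.choice(vocab)  # 10%: random token
            # else: 10% keep the original token unchanged
    return masked, labels

rng = random.Random(0)
vocab = ["the", "cat", "sat", "on", "mat"]
sentence = ["the", "cat", "sat", "on", "the", "mat"] * 20
corrupted, labels = mask_tokens(sentence, vocab, rng)
```

Only the positions recorded in `labels` contribute to the loss, matching the point above about ignoring non-masked predictions.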
Next Sentence Prediction aka NSP
During training, the model receives pairs of sentences as input and learns to predict whether the second sentence in the pair follows the first in the original document. Fifty percent of the training inputs are pairs in which the second sentence genuinely is the subsequent sentence in the document, while in the other fifty percent a random sentence from the corpus is chosen as the second sentence. The assumption is that a random sentence will be disconnected from the first.
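A rough sketch of how such pairs might be constructed follows. The documents are invented, and unlike a real implementation, this sketch does not guard against the random sentence accidentally being the true next one:

```python
import random

def make_nsp_pair(documents, rng):
    """Build one Next Sentence Prediction example: 50% of the time
    sentence B really follows sentence A in the same document,
    otherwise B is a random sentence from a random document."""
    doc = rng.choice(documents)
    i = rng.randrange(len(doc) - 1)
    sent_a = doc[i]
    if rng.random() < 0.5:
        return sent_a, doc[i + 1], "IsNext"      # true next sentence
    return sent_a, rng.choice(rng.choice(documents)), "NotNext"

docs = [
    ["the cat sat", "it purred", "then it slept"],
    ["rain fell", "the street was wet"],
]
rng = random.Random(1)
pairs = [make_nsp_pair(docs, rng) for _ in range(6)]
```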
To help the model distinguish between the two sentences during training, the input is processed as follows before entering the model:
- A [CLS] token is inserted at the beginning of the first sentence, and a [SEP] token is added at the end of each sentence
- A sentence embedding indicating sentence A or sentence B is added to each token. Sentence embeddings are similar in concept to token embeddings, but with a vocabulary of only two
- A positional embedding is added to each token to indicate its position in the sequence. The concept and implementation of positional embeddings are presented in the Transformer paper
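The three steps above can be sketched as follows. A real implementation operates on subword ids rather than word strings, and the segment and position ids below would index learned embedding tables:

```python
def build_inputs(sent_a, sent_b):
    """Assemble the per-token signals BERT sums before encoding:
    [CLS]/[SEP] token packing, segment ids (0 = sentence A,
    1 = sentence B), and position ids."""
    tokens = ["[CLS]"] + sent_a + ["[SEP]"] + sent_b + ["[SEP]"]
    segment_ids = [0] * (len(sent_a) + 2) + [1] * (len(sent_b) + 1)
    position_ids = list(range(len(tokens)))
    return tokens, segment_ids, position_ids

tokens, segments, positions = build_inputs(
    ["the", "cat", "sat"], ["it", "purred"])
# tokens   : [CLS] the cat sat [SEP] it  purred [SEP]
# segments : 0     0   0   0   0     1   1      1
```

For each token, the token embedding, segment embedding, and positional embedding are summed to form the encoder's input.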
How to Predict the Second Sentence
To predict whether the second sentence is actually connected to the first, the following steps are performed:
- The entire input sequence goes through the Transformer model
- The output of the [CLS] token is transformed into a 2×1 shaped vector using a simple classification layer
- The probability of IsNextSequence is calculated with softmax
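Those steps can be illustrated with made-up numbers; the 2×4 weight matrix below stands in for the learned classification layer, and the [CLS] vector is invented:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# Hypothetical 4-dim output vector for the [CLS] token.
cls_output = [0.6, -0.2, 0.9, 0.1]

# Toy classification layer (weights are made up): one row of weights
# per class, producing a 2x1 vector of logits.
weights = [
    [0.5, 0.1, 0.7, -0.3],   # row for IsNext
    [-0.4, 0.2, -0.6, 0.5],  # row for NotNext
]
bias = [0.0, 0.0]

logits = [sum(w * x for w, x in zip(row, cls_output)) + b
          for row, b in zip(weights, bias)]
probs = softmax(logits)        # [P(IsNext), P(NotNext)]
is_next = probs[0] > probs[1]  # True for these toy values
```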
It is worth noting that Masked LM and Next Sentence Prediction are trained together in the BERT model, with the goal of minimizing the combined loss function of the two strategies.
You can utilize BERT for a wide variety of language tasks, with only a small task-specific layer added to the core model:
- Sentiment analysis and other classification tasks are handled much like Next Sentence classification, by adding a classification layer on top of the Transformer output for the [CLS] token
- You can train a Question Answering model with BERT by learning two extra vectors that mark the beginning and the end of the answer
- You can also train a Named Entity Recognition (NER) model with BERT by feeding each token's output vector into a classification layer that predicts the NER label
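As a toy illustration of the token-classification setup used for NER: all vectors and labels below are invented, and the classification layer is reduced to an identity mapping (each hidden dimension maps directly onto one label's logit) purely for readability:

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of floats.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

labels = ["O", "B-PER", "B-LOC"]   # hypothetical tiny label set

# Hypothetical 3-dim encoder output vectors, one per input token.
token_outputs = {
    "jim":     [0.1, 0.9, 0.0],
    "visited": [0.8, 0.1, 0.1],
    "paris":   [0.0, 0.2, 0.9],
}

def classify(vec):
    """Toy classification layer with identity weights: softmax the
    token's output vector and pick the highest-probability label."""
    probs = softmax(vec)
    return labels[probs.index(max(probs))]

predictions = {tok: classify(vec) for tok, vec in token_outputs.items()}
# → {"jim": "B-PER", "visited": "O", "paris": "B-LOC"}
```

The same per-token pattern generalizes: only the small output layer changes between tasks, while the pre-trained encoder is shared.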
The BERT team used this technique to attain state-of-the-art results on a wide variety of challenging natural language tasks.