[1910.13267v1] BPE-Dropout: Simple and Effective Subword Regularization
Models trained with BPE-dropout (1) outperform BPE and previous subword regularization on a wide range of translation tasks, (2) learn better-quality embeddings, and (3) are more robust to noisy input.
Abstract. Subword segmentation is widely used to address the open vocabulary problem in machine translation. The dominant approach to subword segmentation is Byte Pair Encoding (BPE), which keeps the most frequent words intact while splitting the rare ones into multiple tokens. While multiple segmentations are possible even with the same vocabulary, BPE splits words into unique sequences; this may prevent a model from better learning the compositionality of words and from being robust to segmentation errors. So far, the only way to overcome this BPE imperfection, its deterministic nature, was to create another subword segmentation algorithm (Kudo, 2018). In contrast, we show that BPE itself incorporates the ability to produce multiple segmentations of the same word. We introduce BPE-dropout, a simple and effective subword regularization method based on and compatible with conventional BPE. It stochastically corrupts the segmentation procedure of BPE, which produces multiple segmentations within the same fixed BPE framework. Using BPE-dropout during training and the standard BPE during inference improves translation quality by up to 3 BLEU compared to BPE and by up to 0.9 BLEU compared to the previous subword regularization.
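The abstract describes the segmentation procedure only at a high level. Below is a minimal sketch, in Python, of how stochastically dropping merges can be implemented. The function name, the merge-table representation (a dict from adjacent symbol pairs to their priority), and the omission of end-of-word markers are illustrative assumptions rather than the authors' reference implementation; real implementations differ in details such as how many merges are applied per step.

```python
import random

def bpe_dropout_segment(word, merge_ranks, p=0.1, rng=random):
    """Segment a single word with BPE-dropout (a sketch).

    merge_ranks maps a pair of adjacent symbols (a, b) to its priority in the
    merge table (lower rank = learned earlier = applied first). With p = 0
    this reduces to standard, deterministic BPE segmentation.
    """
    symbols = list(word)  # start from characters; real implementations also
                          # add an end-of-word marker, omitted here for brevity
    while len(symbols) > 1:
        # Candidate merges: adjacent pairs found in the merge table,
        # each independently dropped with probability p at this step.
        candidates = [
            (merge_ranks[pair], i)
            for i, pair in enumerate(zip(symbols, symbols[1:]))
            if pair in merge_ranks and rng.random() >= p
        ]
        if not candidates:
            break  # nothing survived this step: keep the finer segmentation
        rank, i = min(candidates)  # apply the best-ranked surviving merge
        symbols = symbols[:i] + [symbols[i] + symbols[i + 1]] + symbols[i + 2:]
    return symbols
```

With p = 0 the drop never fires and the function reduces to greedy deterministic BPE; during training, repeated calls on the same word may return different segmentations, which is the regularization effect described above.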
Figure captions:
- Segmentation process of the word ‘unrelated’ using (a) BPE and (b) BPE-dropout. Hyphens indicate possible merges (merges which are present in the merge table); merges performed at each iteration are shown in green, dropped merges in red. (Introduction)
- BLEU scores for models trained with BPE-dropout with different values of p. WMT14 En-Fr, 500k sentence pairs. (Choice of the value of p)
- BLEU scores for models trained on random subsets of WMT14 En-Fr. (Varying corpora and vocabulary size)
- Distributions of length (in tokens) of (a) the French part of the WMT14 En-Fr test set segmented using BPE or BPE-dropout, and (b) the translations of the same test set generated by models trained with BPE or BPE-dropout. (Inference time and length of generated sequences)
- Examples of nearest neighbours in the source embedding space of models trained with BPE and BPE-dropout. Models trained on WMT14 En-Fr (4m). (Analysis)
- Distribution of the token-to-substring ratio for texts segmented using BPE or BPE-dropout with the same vocabulary of 32k tokens; only the 10% most frequent substrings are shown. The token-to-substring ratio of a token is the ratio between its frequency as an individual token and its frequency as a sequence of characters; a small computation sketch follows this list. (Substring frequency)
- Visualization of source embeddings for (a) BPE and (b) BPE-dropout. Models trained on WMT14 En-Fr (4m). (Properties of the learned embeddings)
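To make the token-to-substring ratio from the last caption concrete, here is a small sketch; the helper name and the assumed inputs (a segmented corpus as a list of token lists, plus the corresponding raw text) are hypothetical and not part of the paper.

```python
def token_to_substring_ratio(token, segmented_corpus, raw_text):
    """Frequency of `token` as an individual token in the segmented corpus,
    divided by its frequency as a character substring of the raw text."""
    as_token = sum(sent.count(token) for sent in segmented_corpus)
    as_substring = raw_text.count(token)  # non-overlapping occurrences
    return as_token / as_substring if as_substring else 0.0
```

A ratio close to 1 means the substring is almost always kept as a single token whenever it occurs; the figure compares how this distribution differs between BPE and BPE-dropout segmentations.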