
Tokenization | Subword Regularization

  • Related Project: Private
  • Category: Paper Review
  • Author: MinWoo Park
  • Date: 2023-11-18

Subword Regularization: Improving Neural Network Translation Models with Multiple Subword Candidates

  • url: https://arxiv.org/abs/1804.10959
  • pdf: https://arxiv.org/pdf/1804.10959
  • abstract: Subword units are an effective way to alleviate the open vocabulary problems in neural machine translation (NMT). While sentences are usually converted into unique subword sequences, subword segmentation is potentially ambiguous and multiple segmentations are possible even with the same vocabulary. The question addressed in this paper is whether it is possible to harness the segmentation ambiguity as a noise to improve the robustness of NMT. We present a simple regularization method, subword regularization, which trains the model with multiple subword segmentations probabilistically sampled during training. In addition, for better subword sampling, we propose a new subword segmentation algorithm based on a unigram language model. We experiment with multiple corpora and report consistent improvements especially on low resource and out-of-domain settings.

[Tokenizer key index marker]


Contents

TL;DR


  • Uses multiple subword segmentations in neural machine translation
  • Improves translation accuracy and robustness through subword regularization
  • Experimental validation across diverse languages and datasets

1. Introduction

Neural Machine Translation (NMT) relies on a fixed word vocabulary for both training and inference, and limiting the vocabulary increases the number of unknown words, which degrades translation accuracy, especially in open-vocabulary settings. A common approach to overcome this is to split rare words into subword units. Byte-Pair-Encoding (BPE) has been applied in many NMT systems and achieves excellent translation quality. However, even with the same vocabulary, a single sentence can be represented by multiple subword sequences, and NMT treats these sequences as different inputs.

This work proposes subword regularization, a new regularization method for open-vocabulary NMT. It uses multiple subword segmentations to improve the accuracy and robustness of NMT models. Specifically, the paper proposes a simple NMT training algorithm that integrates multiple segmentation candidates, and a new subword segmentation algorithm based on a language model. This language model can emulate the noise generated during segmentation.

\[\mathcal{L}(\theta) = \sum_{(X,Y)\in\mathcal{D}}\log\sum_{x,y}P(y\mid x,\theta)\,P(x\mid X)\,P(y\mid Y)\]

The objective above optimizes the marginal probability over the multiple possible segmentations of the subword sequences with respect to the parameter set $\theta$.


2. Neural Machine Translation with Multiple Subword Segmentations

2.1 NMT Training with On-the-fly Subword Sampling

Given a source sentence $X$ and a target sentence $Y$, let $x = (x_1, \dots, x_M)$ and $y = (y_1, \dots, y_N)$ be the corresponding subword sequences. NMT models the translation as a target language sequence model:

\[P(y_n\mid y_{<n}, x, \theta) = \text{NMT}(y_{<n}, x; \theta)\]

Here $\theta$ is the set of model parameters. Subword regularization is not specific to this architecture and can be applied to other NMT architectures as well. NMT is trained with standard maximum likelihood estimation, maximizing the log-likelihood of a given parallel corpus $\mathcal{D}$.

2.2 Decoding

In NMT decoding, only the raw source sentence $X$ is available. A straightforward approach is to translate from the best segmentation $x^*$ that maximizes the probability:

\[x^* = \arg\max_x P(x\mid X)\]

Alternatively, the $n$-best segmentations can be used to incorporate multiple segmentation candidates. In this case, the best translation $y^*$ is chosen by maximizing the following score:

\[y^* = \arg\max_y \left\{\max_{1\leq i\leq n}\Big(\log P(y\mid x_i,\theta) + \lambda\log P(x_i\mid X)\Big) + \lambda|y|\right\}\]

Here $|y|$ is the number of subwords in $y$, and $\lambda\in\mathbb{R}^+$ is a parameter that penalizes shorter sentences.


3. Subword Segmentation with a Language Model

3.1 Byte-Pair-Encoding (BPE)

BPE is a subword segmentation algorithm widely used in many NMT systems. BPE first splits the whole sentence into individual characters, then repeatedly merges the most frequent adjacent pair of symbols until the desired vocabulary size is reached. Subword segmentation of a test sentence is performed by applying the same merge operations.

3.2 Unigram Language Model

This paper proposes a new subword segmentation algorithm based on a unigram language model, which can output multiple subword segmentations together with their probabilities. The unigram language model assumes that each subword occurs independently.


4. Related Work

Subword regularization can be seen as a variant of ensemble training, in which the model is trained on different data inputs randomly sampled from the original input sentences. Previous studies injected noise into input sentences, but such synthetic noise did not always reflect the noise encountered during actual training and inference. Subword regularization was proposed to address this shortcoming.


5. Experiments

Experiments on corpora of various sizes and languages demonstrate that subword regularization substantially improves the accuracy and robustness of NMT models. In addition, evaluations in open-domain settings show significant improvements compared with the standard in-domain evaluation.


1 Introduction

Neural Machine Translation (NMT) models (Bahdanau et al., 2014; Luong et al., 2015; Wu et al., 2016; Vaswani et al., 2017) often operate with fixed word vocabularies, as their training and inference depend heavily on the vocabulary size. However, limiting vocabulary size increases the amount of unknown words, which makes the translation inaccurate especially in an open vocabulary setting.

A common approach for dealing with the open vocabulary issue is to break up rare words into subword units (Schuster and Nakajima, 2012; Chitnis and DeNero, 2015; Sennrich et al., 2016; Wu et al., 2016). Byte-Pair-Encoding (BPE) (Sennrich et al., 2016) is a de facto standard subword segmentation algorithm applied to many NMT systems and achieving top translation quality in several shared tasks (Denkowski and Neubig, 2017; Nakazawa et al., 2017). BPE segmentation gives a good balance between the vocabulary size and the decoding efficiency, and also sidesteps the need for a special treatment of unknown words. BPE encodes a sentence into a unique subword sequence. However, a sentence can be represented in multiple subword sequences even with the same vocabulary. Table 1 illustrates an example. While these sequences encode the same input “Hello World”, NMT handles them as completely different inputs. This observation becomes more apparent when converting subword sequences into id sequences (right column in Table 1). These variants can be viewed as a spurious ambiguity, which might not always be resolved in the decoding process. At training time of NMT, multiple segmentation candidates will make the model robust to noise and segmentation errors, as they can indirectly help the model to learn the compositionality of words, e.g., “books” can be decomposed into “book” + “s”.

Table 1: Multiple subword sequences encoding the same sentence “Hello World”

In this study, we propose a new regularization method for open-vocabulary NMT, called subword regularization, which employs multiple subword segmentations to make the NMT model accurate and robust. Subword regularization consists of the following two sub-contributions:

  • We propose a simple NMT training algorithm to integrate multiple segmentation candidates. Our approach is implemented as an on-the-fly data sampling, which is not specific to the NMT architecture. Subword regularization can be applied to any NMT system without changing the model structure.
  • We also propose a new subword segmentation algorithm based on a language model, which provides multiple segmentations with probabilities. The language model allows us to emulate the noise generated during the segmentation of actual data.

We here assume that the source and target sentences $X$ and $Y$ can be segmented into multiple subword sequences with the segmentation probabilities \(P(x\mid X)\) and \(P(y\mid Y)\) respectively. In subword regularization, we optimize the parameter set \(\theta\) with the marginalized likelihood as (3):

\[\mathcal{L}(\theta) = \sum_{(X,Y)\in\mathcal{D}}\log\sum_{x,y}P(y\mid x,\theta)\,P(x\mid X)\,P(y\mid Y)\]

Empirical experiments using multiple corpora with different sizes and languages show that subword regularization achieves significant improvements over the method using a single subword sequence. In addition, through experiments with out-of-domain corpora, we show that subword regularization improves the robustness of the NMT model.

Exact optimization of (3) is not feasible as the number of possible segmentations increases exponentially with respect to the sentence length. We approximate (3) with finite $k$ sequences sampled from \(P(x\mid X)\) and \(P(y\mid Y)\) respectively.

2 Neural Machine Translation with multiple subword segmentations

2.1 NMT training with on-the-fly subword sampling

Given a source sentence \(X\) and a target sentence \(Y\), let \(x = (x_1, \dots, x_M)\) and \(y = (y_1, \dots, y_N)\) be the corresponding subword sequences segmented with an underlying subword segmenter, e.g., BPE. NMT models the translation probability \(P(Y\mid X) = P(y\mid x)\) as a target language sequence model that generates target subword \(y_n\) conditioning on the target history \(y_{<n}\) and source input sequence \(x\):

\[P(y_n\mid y_{<n}, x, \theta) = \text{NMT}(y_{<n}, x; \theta)\]

where $\theta$ is a set of model parameters. A common choice to predict the subword \(y_n\) is to use a recurrent neural network (RNN) architecture. However, note that subword regularization is not specific to this architecture and can be applicable to other NMT architectures without RNN, e.g., (Vaswani et al., 2017; Gehring et al., 2017).

NMT is trained using the standard maximum likelihood estimation, i.e., maximizing the log-likelihood \(\mathcal{L}(\theta)\) of a given parallel corpus \(\mathcal{D}\):

\[\mathcal{L}(\theta) = \sum_{(X,Y)\in\mathcal{D}}\log P(Y\mid X,\theta)\]

For the sake of simplicity, we use $k=1$. NMT is usually trained with online training for efficiency, in which the parameter \(\theta\) is iteratively optimized with respect to smaller subsets of \(\mathcal{D}\) (mini-batches). When we have a sufficient number of iterations, subword sampling is executed via the data sampling of online training, which yields a good approximation of (3) even if $k=1$. It should be noted, however, that the subword sequence is sampled on-the-fly for each parameter update.
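The on-the-fly sampling described above can be sketched as a data pipeline that re-segments every sentence pair before each parameter update. The snippet below is a minimal illustration, assuming the `sentencepiece` Python package and trained unigram models; the `src_unigram.model` / `tgt_unigram.model` paths and the `train_step` function are hypothetical placeholders, not part of the paper.

```python
# Minimal sketch of on-the-fly subword sampling (k = 1) with SentencePiece.
import sentencepiece as spm

src_sp = spm.SentencePieceProcessor(model_file="src_unigram.model")  # hypothetical path
tgt_sp = spm.SentencePieceProcessor(model_file="tgt_unigram.model")  # hypothetical path

def sampled_pairs(parallel_corpus, epochs, nbest_size=64, alpha=0.1):
    """Re-segment every sentence pair on each pass, so each parameter update
    sees a freshly sampled subword sequence drawn from P(x|X) and P(y|Y)."""
    for _ in range(epochs):
        for src, tgt in parallel_corpus:
            x = src_sp.encode(src, out_type=str, enable_sampling=True,
                              nbest_size=nbest_size, alpha=alpha)
            y = tgt_sp.encode(tgt, out_type=str, enable_sampling=True,
                              nbest_size=nbest_size, alpha=alpha)
            yield x, y

# for x, y in sampled_pairs(corpus, epochs=10):
#     train_step(x, y)  # placeholder: one gradient update of the NMT model
```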

2.2 Decoding

In the decoding of NMT, we only have a raw source sentence $X$. A straightforward approach for decoding is to translate from the best segmentation $x^*$ that maximizes the probability \(P(x\mid X)\), i.e., \(x^* = \arg\max_x P(x\mid X)\). Additionally, we can use the \(n\)-best segmentations of \(P(x\mid X)\) to incorporate multiple segmentation candidates. More specifically, given \(n\)-best segmentations \((x_1, \dots, x_n)\), we choose the best translation \(y^*\) that maximizes the following score:

\[y^* = \arg\max_y \left\{\max_{1\leq i\leq n}\Big(\log P(y\mid x_i,\theta) + \lambda\log P(x_i\mid X)\Big) + \lambda|y|\right\}\]

where $|y|$ is the number of subwords in $y$ and \(\lambda\in\mathbb{R}^+\) is the parameter to penalize shorter sentences. \(\lambda\) is optimized with the development data.

In this paper, we call these two algorithms one-best decoding and $n$-best decoding, respectively.
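As a rough illustration of $n$-best decoding, the sketch below scores each candidate segmentation with the decoding score above; `nbest_segmentations` and `translate` are hypothetical helpers standing in for the segmenter's $n$-best search and a trained NMT decoder.

```python
# Toy sketch of n-best decoding (Section 2.2). Assumes hypothetical helpers:
#   nbest_segmentations(X, n) -> list of (segmentation x_i, log P(x_i | X))
#   translate(x)              -> (subword sequence y, log P(y | x, theta))
import math

def nbest_decode(X, n=4, lam=0.2):
    best_y, best_score = None, -math.inf
    for x_i, log_px in nbest_segmentations(X, n):
        y, log_py = translate(x_i)
        # Translation log-prob + segmentation log-prob + length bonus lam*|y|
        # (the bonus rewards longer outputs, i.e. penalizes shorter sentences).
        score = log_py + lam * log_px + lam * len(y)
        if score > best_score:
            best_y, best_score = y, score
    return best_y
```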

3 Subword segmentations with language model

3.1 Byte-Pair-Encoding (BPE)

Byte-Pair-Encoding (BPE) (Sennrich et al., 2016; Schuster and Nakajima, 2012) is a subword segmentation algorithm widely used in many NMT systems. BPE first splits the whole sentence into individual characters. The most frequent adjacent pairs of characters are then consecutively merged until reaching a desired vocabulary size. Subword segmentation is performed by applying the same merge operations to the test sentence.

An advantage of BPE segmentation is that it can effectively balance the vocabulary size and the step size (the number of tokens required to encode the sentence). BPE learns the merge operations using only symbol frequencies. Frequent substrings are joined early, so common words remain as single unique symbols, while words consisting of rare character combinations are split into smaller units, e.g., substrings or characters. Therefore, with only a small fixed vocabulary (usually 16k to 32k), the number of symbols required to encode a sentence does not grow significantly, which is an important feature for efficient decoding.
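The merge procedure described above can be written down in a few lines. The following is a compact sketch of BPE merge learning on a toy word-frequency table (for illustration only, not the paper's implementation), where the most frequent adjacent symbol pair is merged repeatedly:

```python
# Compact sketch of BPE merge learning: start from characters and repeatedly
# merge the most frequent adjacent symbol pair until num_merges is reached.
from collections import Counter

def learn_bpe(word_freqs, num_merges):
    vocab = {tuple(word): freq for word, freq in word_freqs.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols, freq in vocab.items():
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += freq
        if not pairs:
            break
        best = max(pairs, key=pairs.get)       # most frequent adjacent pair
        merges.append(best)
        merged_vocab = {}
        for symbols, freq in vocab.items():
            out, i = [], 0
            while i < len(symbols):
                if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == best:
                    out.append(symbols[i] + symbols[i + 1])
                    i += 2
                else:
                    out.append(symbols[i])
                    i += 1
            merged_vocab[tuple(out)] = freq
        vocab = merged_vocab
    return merges

# Toy run: the most frequent adjacent pairs (e.g. ('e', 's'), ('s', 't')) merge first.
print(learn_bpe({"low": 5, "lower": 2, "newest": 6, "widest": 3}, num_merges=10))
```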

One downside, however, is that BPE is based on a greedy and deterministic symbol replacement, which cannot provide multiple segmentations with probabilities. It is not trivial to apply BPE to subword regularization, which depends on the segmentation probabilities \(P(x\mid X)\).

3.2 Unigram language model

In this paper, we propose a new subword segmentation algorithm based on a unigram language model, which is capable of outputting multiple subword segmentations with probabilities. The unigram language model assumes that each subword occurs independently, and consequently the probability of a subword sequence $x = (x_1, \dots, x_M)$ is formulated as the product of the subword occurrence probabilities $p(x_i)$, i.e., \(P(x) = \prod_{i=1}^{M} p(x_i)\).
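Under this independence assumption, the most probable segmentation can be found with dynamic programming (Viterbi search). The sketch below uses a toy, hand-made vocabulary of subword probabilities purely to illustrate the idea:

```python
# Sketch of Viterbi segmentation under a unigram LM with a toy vocabulary:
# P(x) = prod_i p(x_i), so the best segmentation maximizes sum_i log p(x_i).
import math

def viterbi_segment(text, probs):
    """probs: dict mapping subword -> unigram probability (toy values)."""
    n = len(text)
    best = [-math.inf] * (n + 1)   # best log-prob of a segmentation of text[:i]
    back = [0] * (n + 1)           # start index of the last subword
    best[0] = 0.0
    for i in range(1, n + 1):
        for j in range(i):
            piece = text[j:i]
            if piece in probs and best[j] + math.log(probs[piece]) > best[i]:
                best[i] = best[j] + math.log(probs[piece])
                back[i] = j
    pieces, i = [], n
    while i > 0:
        pieces.append(text[back[i]:i])
        i = back[i]
    return list(reversed(pieces)), best[n]

toy = {"hello": 0.05, "hell": 0.02, "o": 0.05, "he": 0.04, "llo": 0.01,
       "h": 0.03, "e": 0.05, "l": 0.06}
print(viterbi_segment("hello", toy))   # -> (['hello'], log 0.05)
```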

Strictly speaking, wordpiece model (Schuster and Nakajima, 2012) is different from BPE. We consider wordpiece as a variant of BPE, as it also uses an incremental vocabulary generation with a different loss function.

Wordpiece model uses a likelihood instead of frequency.

In the real setting, however, the vocabulary set $V$ is also unknown. Because the joint optimization of the vocabulary set and its occurrence probabilities is intractable, we seek to find them with the following iterative algorithm:

  1. Heuristically make a reasonably big seed vocabulary from the training corpus.
  2. Repeat the following steps until $|V|$ reaches a desired vocabulary size:

    (a) Fixing the set of vocabulary, optimize $p(x)$ with the EM algorithm.
    (b) Compute the $\text{loss}_i$ for each subword $x_i$, where $\text{loss}_i$ represents how much the likelihood $\mathcal{L}$ is reduced when the subword $x_i$ is removed from the current vocabulary.
    (c) Sort the symbols by $\text{loss}_i$ and keep the top $\eta\%$ of subwords ($\eta$ is 80, for example). Note that we always keep the subwords consisting of a single character to avoid out-of-vocabulary symbols.

The target sequence $y = (y_1, \dots, y_N)$ can also be modeled similarly.

There are several ways to prepare the seed vocabulary. The natural choice is to use the union of all characters and the most frequent substrings in the corpus$^4$. Frequent substrings can be enumerated in $O(T)$ time and $O(20T)$ space with the Enhanced Suffix Array algorithm (Nong et al., 2009), where $T$ is the size of the corpus. Similar to (Sennrich et al., 2016), we do not consider subwords that cross word boundaries.
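For illustration only, a naive seed-vocabulary builder might look like the following; it counts within-word substrings directly instead of using the Enhanced Suffix Array mentioned above, so it is far less efficient but shows the same idea:

```python
# Naive sketch of seed-vocabulary construction: all characters plus the most
# frequent within-word substrings (no subwords crossing word boundaries).
from collections import Counter

def seed_vocab(sentences, seed_size=8000, max_len=8):
    chars, substrings = set(), Counter()
    for sentence in sentences:
        for word in sentence.split():
            chars.update(word)
            for i in range(len(word)):
                for j in range(i + 2, min(len(word), i + max_len) + 1):
                    substrings[word[i:j]] += 1
    top = [s for s, _ in substrings.most_common(max(seed_size - len(chars), 0))]
    return sorted(chars) + top
```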

As the final vocabulary $V$ contains all individual characters in the corpus, character-based segmentation is also included in the set of segmentation candidates $\mathcal{S}(X)$. In other words, subword segmentation with the unigram language model can be seen as a probabilistic mixture of characters, subwords and word segmentations.

3.3 Subword sampling

Subword regularization samples one subword segmentation from the distribution $P(x\mid X)$ for each parameter update. A straightforward approach for an approximate sampling is to use the $l$-best segmentations. More specifically, we first obtain the $l$-best segmentations according to the probability $P(x\mid X)$. The $l$-best search is performed in linear time with the Forward-DP Backward-A* algorithm (Nagata, 1994). One segmentation $x_i$ is then sampled from the multinomial distribution $P(x_i\mid X) \approx P(x_i)^{\alpha} / \sum_{j=1}^{l} P(x_j)^{\alpha}$, where $\alpha \in \mathbb{R}^+$ is a hyperparameter that controls the smoothness of the distribution. A smaller $\alpha$ leads to sampling $x_i$ from a more uniform distribution, while a larger $\alpha$ tends to select the Viterbi segmentation.
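The $l$-best sampling step amounts to drawing from a temperature-scaled multinomial over the candidate segmentations. A small sketch, assuming `candidates` is a list of `(segmentation, P(x))` pairs returned by an $l$-best search:

```python
# Sketch of l-best subword sampling: candidate probabilities are sharpened or
# flattened by alpha, then one segmentation is drawn from the multinomial.
import random

def sample_segmentation(candidates, alpha=0.1):
    weights = [p ** alpha for _, p in candidates]
    total = sum(weights)
    return random.choices([seg for seg, _ in candidates],
                          weights=[w / total for w in weights], k=1)[0]

# alpha -> 0 approaches uniform sampling over the l-best list;
# a large alpha concentrates mass on the Viterbi (highest-probability) segmentation.
```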

Setting $l \to \infty$, in theory, allows us to take all possible segmentations into account. However, it is not feasible to increase $l$ explicitly, as the number of candidates increases exponentially with respect to the sentence length. In order to sample exactly from all possible segmentations, we use the Forward-Filtering and Backward-Sampling algorithm (FFBS) (Scott, 2002), a variant of dynamic programming originally introduced for Bayesian hidden Markov models. In FFBS, all segmentation candidates are represented in a compact lattice structure, where each node denotes a subword. In the first pass, FFBS computes a set of forward probabilities for all subwords in the lattice, which give the probability of ending up in any particular subword $w$. In the second pass, traversing the nodes in the lattice from the end of the sentence to the beginning, subwords are recursively sampled for each branch according to the forward probabilities.

It is also possible to run BPE with a sufficient number of merge operations.

3.4 BPE vs. Unigram language model

BPE was originally introduced in the data compression literature (Gage, 1994). BPE is a variant of a dictionary (substitution) encoder that incrementally finds a set of symbols such that the total number of symbols for encoding the text is minimized. On the other hand, the unigram language model can be reformulated as an entropy encoder that minimizes the total code length for the text. According to Shannon's coding theorem, the optimal code length for a symbol $s$ is $-\log p_s$, where $p_s$ is the occurrence probability of $s$. This is essentially the same as the segmentation strategy of the unigram language model described in (7).
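To make the entropy-coding view concrete, the toy calculation below compares the total code length $\sum_i -\log_2 p(x_i)$ of two segmentations under made-up unigram probabilities; the segmentation the unigram model prefers is exactly the one with the shorter code.

```python
# Toy illustration of the entropy-coding view: under a unigram model the code
# length of a segmentation is sum_i -log2 p(x_i), so higher-probability subwords
# get shorter codes. The probabilities below are made up for illustration.
import math

probs = {"_hello": 0.02, "_world": 0.03, "_hell": 0.005, "o": 0.05}
for segmentation in [["_hello", "_world"], ["_hell", "o", "_world"]]:
    bits = sum(-math.log2(probs[s]) for s in segmentation)
    print(segmentation, f"{bits:.2f} bits")
# The single-piece segmentation of "hello" needs fewer bits, matching the
# Viterbi segmentation that the unigram model would choose.
```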

BPE and the unigram language model share the same idea that they encode a text using fewer bits with a certain data compression principle (dictionary vs. entropy). Therefore, we expect to see the same benefit as BPE with the unigram language model. However, the unigram language model is more flexible as it is based on a probabilistic language model and can output multiple segmentations with their probabilities, which is an essential requirement for subword regularization.

4 Related Work

Regularization by noise is a well studied technique in deep neural networks. A well-known example is dropout (Srivastava et al., 2014), which randomly turns off a subset of hidden units during training. Dropout is analyzed as an ensemble training, where many different models are trained on different subsets of the data. Subword regularization trains the model on different data inputs randomly sampled from the original input sentences, and thus can be regarded as a variant of ensemble training.

The idea of noise injection has previously been used in the context of Denoising Auto-Encoders (DAEs) (Vincent et al., 2008), where noise is added to the inputs and the model is trained to reconstruct the original inputs. There are a couple of studies that employ DAEs in natural language processing.

(Lample et al., 2017; Artetxe et al., 2017) independently propose DAEs in the context of sequence-to-sequence learning, where they randomly alter the word order of the input sentence and the model is trained to reconstruct the original sentence. Their technique is applied to unsupervised machine translation to make the encoder truly learn the compositionality of input sentences. Word dropout (Iyyer et al., 2015) is a simple approach for a bag-of-words representation, in which the embedding of a certain word sequence is simply calculated by averaging the word embeddings. Word dropout randomly drops words from the bag before averaging word embeddings, and consequently can see $2^{|X|}$ different token sequences for each input $X$.

(Belinkov and Bisk, 2017) explore the training of character-based NMT with a synthetic noise that randomly changes the order of characters in a word. (Xie et al., 2017) also propose a robust RNN language model that interpolates a random unigram language model.

The basic idea and motivation behind subword regularization are similar to those of previous work. In order to increase the robustness, they inject noise to input sentences by randomly changing the internal representation of sentences. However, these previous approaches often depend on heuristics to generate synthetic noises, which do not always reflect the real noises on training and inference. In addition, these approaches can only be applied to source sentences (encoder), as they irreversibly rewrite the surface of sentences. Subword regularization, on the other hand, generates synthetic subword sequences with an underlying language model to better emulate the noises and segmentation errors. As subword regularization is based on an invertible conversion, we can safely apply it both to source and target sentences.

Subword regularization can also be viewed as a data augmentation. In subword regularization, an input sentence is converted into multiple invariant sequences, which is similar to the data augmentation for image classification tasks, for example, random flipping, distorting, or cropping.

There are several studies focusing on segmentation ambiguities in language modeling. Latent Sequence Decompositions (LSDs) (Chan et al., 2016) learn the mapping from the input to the output by marginalizing over all possible segmentations. LSDs and subword regularization do not assume a predetermined segmentation for a sentence, and take multiple segmentations into account by a similar marginalization technique. The difference is that subword regularization injects the multiple segmentations with a separate language model through an on-the-fly subword sampling. This approach makes the model simple and independent from NMT architectures.

Lattice-to-sequence models (Su et al., 2017; Sperber et al., 2017) are a natural extension of sequence-to-sequence models, which represent input uncertainty through lattices. The lattice is encoded with a variant of TreeLSTM (Tai et al., 2015), which requires changing the model architecture. In addition, while subword regularization is applied both to source and target sentences, lattice-to-sequence models do not handle target-side ambiguities.

A mixed word/character model (Wu et al., 2016) addresses the out-of-vocabulary problem with a fixed vocabulary. In this model, out-of-vocabulary words are not collapsed into a single UNK symbol, but converted into a sequence of characters with special prefixes representing the positions in the word. Similar to BPE, this model also encodes a sentence into a unique, fixed sequence, so multiple segmentations are not taken into account.

5 Experiments

5.1 Setting

We conducted experiments using multiple corpora with different sizes and languages. Table 2 summarizes the evaluation data we used. IWSLT15/17 and KFTT are relatively small corpora, which include a wider spectrum of languages with different linguistic properties. They can evaluate the language-agnostic property of subword regularization. ASPEC and WMT14 (en↔de) are medium-sized corpora. WMT14 (en↔cs) is a rather big corpus consisting of more than 10M parallel sentences.

We used GNMT (Wu et al., 2016) as the implementation of the NMT system for all experiments. We generally followed the settings and training procedure described in (Wu et al., 2016); however, we changed the settings according to the corpus size. Table 2 shows the hyperparameters we used in each experiment. As common settings, we set the dropout probability to 0.2. For parameter estimation, we used a combination of the Adam (Kingma and Ba, 2014) and SGD algorithms. Both the length normalization and coverage penalty parameters are set to 0.2 (see section 7 in (Wu et al., 2016)).

IWSLT15: http://workshop2015.iwslt.org/

IWSLT17: http://workshop2017.iwslt.org/

KFTT: http://www.phontron.com/kftt/jp/ASPEC

ASPEC: http://lotus.kuee.kyoto-u.a

WMT14: http://statmt.org/wmt14/

WMT14(en↔de) uses the same setting as (Wu et al.,2016).

We set the decoding beam size to 4.

The data was preprocessed with Moses tokenizer before training subword models. It should be noted, however, that Chinese and Japanese have no explicit word boundaries and Moses tokenizer does not segment sentences into words, and hence subword segmentations are trained almost from unsegmented raw sentences in these languages.

We used the case sensitive BLEU score (Papineni et al., 2002) as an evaluation metric. As the output sentences are not segmented in Chinese and Japanese, we segment them with characters and KyTea for Chinese and Japanese respectively before calculating BLEU scores.

BPE segmentation is used as the baseline system. We evaluate three test systems with different sampling strategies: (1) unigram language model-based subword segmentation without subword regularization ($l=1$), (2) with subword regularization ($l=64$, $\alpha=0.1$), and (3) with subword regularization ($l=\infty$, $\alpha=0.2$ for IWSLT and $\alpha=0.5$ for the others). These sampling parameters were determined with preliminary experiments. $l=1$ is aimed at a pure comparison between BPE and the unigram language model. In addition, we compare one-best decoding and $n$-best decoding (see Section 2.2). Because BPE is not able to provide multiple segmentations, we only evaluate one-best decoding for BPE. Consequently, we compare 7 systems ($1 + 3 \times 2$) for each language pair.

Table 3 shows the translation experiment results.

5.2 Main Results

First, as can be seen in the table, BPE and unigram language model without subword regularization (l = 1) show almost comparable BLEU scores. This is not surprising, given that both BPE and the unigram language model are based on data compression algorithms.

We can see that subword regularization (l > 1) boosted BLEU scores quite impressively (+1 to 2 points) in all language pairs except for WMT14 (en→cs) dataset. The gains are larger especially in lower resource settings (IWSLT and KFTT). It can be considered that the positive effects of data augmentation with subword regularization worked better in lower resource settings, which is a common property of other regularization techniques.

http://www.phontron.com/kytea

As for the sampling algorithm, (l = ∞, α = 0.2/0.5) slightly outperforms (l = 64, α = 0.1) on the IWSLT corpus, but they show almost comparable results on the larger data sets. A detailed analysis is given in Section 5.5.

On top of the gains with subword regularization, n-best decoding yields further improvements in many language pairs. However, we should note that the subword regularization is mandatory for n-best decoding and the BLEU score is degraded in some language pairs without subword regularization (l = 1). This result indicates that the decoder is more confused for multiple segmentations when they are not explored at training time.

5.3 Results with out-of-domain corpus

To see the effect of subword regularization on a more open-domain setting, we evaluate the systems with out-of-domain in-house data consisting of multiple genres: Web, patents and query logs. Note that we did not conduct the comparison with KFTT and ASPEC corpora, as we found that the domains of these corpora are too specific, and preliminary evaluations showed extremely poor BLEU scores (less than 5) on out-of-domain corpora.

Table 4 shows the results. Compared to the gains obtained with the standard in-domain evaluations in Table 3, subword regularization achieves significantly larger improvements (+2 points) in every domain. An interesting observation is that we have the same level of improvements even on large training data sets (WMT14), which showed marginal or small gains with the in-domain data. This result strongly supports our claim that subword regularization is more useful for open-domain settings.

5.4 Comparison with other segmentation algorithms

Table 5 compares different segmentation algorithms: word/character and mixed word/character models (Wu et al., 2016), BPE (Sennrich et al., 2016), and our unigram model with or without subword regularization. The BLEU scores of the word, character and mixed word/character models are cited from (Wu et al., 2016). As German is a morphologically rich language and needs a huge vocabulary for word models, subword-based algorithms achieve a gain of more than 1 BLEU point over the word model. Among the subword-based algorithms, the unigram language model with subword regularization achieved the best BLEU score (25.04), which demonstrates the effectiveness of multiple subword segmentations.

5.5 Impact of sampling hyperparameters

Subword regularization has two hyperparameters: $l$, the size of the sampling candidates, and $\alpha$, the smoothing constant. Figure 1 shows the BLEU scores for various hyperparameters on the IWSLT15 (en → vi) dataset. First, we find that the peaks of the BLEU scores against the smoothing parameter $\alpha$ differ depending on the sampling size $l$. This is expected, because $l = \infty$ has a larger search space than $l = 64$ and needs a larger $\alpha$ to sample sequences close to the Viterbi sequence $x^*$.

Another interesting observation is that $\alpha = 0.0$ leads to performance drops especially on $l = \infty$. When $\alpha = 0.0$, the segmentation probability $P(x|X)$ is virtually ignored and one segmentation is uniformly sampled. This result suggests that biased sampling with a language model is helpful to emulate the real noise in the actual translation.

In general, larger $l$ allows a more aggressive regularization and is more effective for low resource settings such as IWSLT. However, the estimation of $\alpha$ is more sensitive and performance becomes even worse than baseline when $\alpha$ is extremely small. To weaken the effect of regularization and avoid selecting invalid parameters, it might be more reasonable to use $l = 64$ for high resource languages.

Table 5: Comparison of different segmentation algorithms (WMT14 en→de)

Although we can see in general that the optimal hyperparameters are roughly predicted with the held-out estimation, it is still an open question how to choose the optimal size $l$ in subword sampling.

5.6 Results with single side regularization

Table 6 summarizes the BLEU scores with subword regularization applied to either the source or the target sentence only, to figure out which component (encoder or decoder) is more affected. As expected, the BLEU scores with single side regularization are worse than with full regularization. However, it should be noted that single side regularization still has positive effects. This result implies that subword regularization is not only helpful for encoder-decoder architectures, but also applicable to other NLP tasks that use only an encoder or a decoder, including text classification (Iyyer et al., 2015) and image caption generation (Vinyals et al., 2015).

Figure 1: Effect of sampling hyperparameters

Table 6: Comparison on different regularization strategies (IWSLT15/17, $l = 64$, $\alpha = 0.1$)

6 Conclusions

In this paper, we presented a simple regularization method, subword regularization, for NMT, with no change to the network architecture. The central idea is to virtually augment training data with on-the-fly subword sampling, which helps to improve the accuracy as well as the robustness of NMT models. In addition, for better subword sampling, we propose a new subword segmentation algorithm based on the unigram language model. Experiments on multiple corpora with different sizes and languages show that subword regularization leads to significant improvements, especially in low-resource and open-domain settings.

Promising avenues for future work are to apply subword regularization to other NLP tasks based on encoder-decoder architectures, e.g., dialog generation (Vinyals and Le, 2015) and automatic summarization (Rush et al., 2015). Compared to machine translation, these tasks do not have enough training data, and thus there could be a large room for improvement with subword regularization. Additionally, we would like to explore the application of subword regularization for machine learning, including Denoising Auto Encoder (Vincent et al., 2008) and Adversarial Training (Goodfellow et al., 2015).

The implementation is available on GitHub.
