Naive Bayes
Intro
Mahout currently has two Naive Bayes MapReduce implementations. The first is standard Multinomial Naive Bayes. The second is an implementation of Transformed Weight-normalized Complement Naive Bayes as introduced by Rennie et al. [1]. We refer to the former as Bayes and the latter as CBayes.
Where Bayes has long been a standard in text classification, CBayes is an extension of Bayes that performs particularly well on datasets with skewed classes and has been shown to be competitive with algorithms of higher complexity such as Support Vector Machines.
Implementations
Both Bayes and CBayes are currently trained via MapReduce jobs. Testing and classification can be done via a MapReduce job or sequentially. Mahout provides CLI drivers for preprocessing, training, and testing. A Spark implementation is currently in the works (MAHOUT-1493).
Preprocessing and Algorithm
As described in [1], Mahout Naive Bayes is broken down into the following steps (assignments are over all possible index values):
- Let \(\vec{d}=(\vec{d_1},...,\vec{d_n})\) be a set of documents, where \(d_{ij}\) is the count of word \(i\) in document \(j\).
- Let \(\vec{y}=(y_1,...,y_n)\) be their labels.
- Let \(\alpha_i\) be a smoothing parameter for each word \(i\) in the vocabulary, and let \(\alpha=\sum_i{\alpha_i}\).
- Preprocessing (via seq2sparse): TF-IDF transformation and L2 length normalization of \(\vec{d}\):
\(d_{ij} = \sqrt{d_{ij}}\)
\(d_{ij} = d_{ij}\left(\log{\frac{\sum_k 1}{\sum_k\delta_{ik}+1}}+1\right)\), where \(\delta_{ik}=1\) if word \(i\) occurs in document \(k\) and \(0\) otherwise
\(d_{ij} =\frac{d_{ij}}{\sqrt{\sum_k{d_{kj}^2}}}\)
- Training: Bayes. Given \((\vec{d},\vec{y})\), calculate term weights \(w_{ci}\) as:
\(\hat\theta_{ci}=\frac{\sum_{j:y_j=c}d_{ij}+\alpha_i}{\sum_{j:y_j=c}\sum_k{d_{kj}}+\alpha}\)
\(w_{ci}=\log{\hat\theta_{ci}}\)
- Training: CBayes. Given \((\vec{d},\vec{y})\), calculate term weights \(w_{ci}\) as:
\(\hat\theta_{ci} = \frac{\sum_{j:y_j\neq c}d_{ij}+\alpha_i}{\sum_{j:y_j\neq c}{\sum_k{d_{kj}}}+\alpha}\)
\(w_{ci}=-\log{\hat\theta_{ci}}\)
\(w_{ci}=\frac{w_{ci}}{\sum_i \lvert w_{ci}\rvert}\)
- Label Assignment/Testing:
  - Let \(\vec{t}=(t_1,...,t_n)\) be a test document, where \(t_i\) is the count of word \(i\) in the document.
  - Label the document according to \(l(t)=\arg\max_c \sum\limits_{i} t_i w_{ci}\)
As we can see, the main difference between Bayes and CBayes is the weight calculation step. Where Bayes weighs terms more heavily based on the likelihood that they belong to class \(c\), CBayes weighs terms based on the likelihood that they do not belong to any other class.
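To make these steps concrete, below is a minimal NumPy sketch of the pipeline above. It is not the Mahout implementation: the function names are illustrative, and it assumes a dense count matrix and a uniform smoothing parameter \(\alpha_i=1\).

import numpy as np

def tfidf_l2(d):
    """Preprocessing: TF-IDF transform and L2 length normalization.
    d is an (n_words, n_docs) count matrix; d[i, j] = count of word i in document j."""
    n_docs = d.shape[1]
    d = np.sqrt(d)                                        # dampen raw term frequencies
    df = np.count_nonzero(d, axis=1)                      # sum_k delta_ik: documents containing word i
    d = d * (np.log(n_docs / (df + 1.0)) + 1.0)[:, None]  # IDF weighting
    norms = np.sqrt((d ** 2).sum(axis=0))                 # L2 length of each document column
    return d / np.where(norms == 0.0, 1.0, norms)

def train_weights(d, y, alpha_i=1.0, complement=False):
    """Term weights w[c, i] for Bayes (complement=False) or CBayes (complement=True)."""
    n_words = d.shape[0]
    classes = np.unique(y)
    w = np.empty((len(classes), n_words))
    for ci, c in enumerate(classes):
        pool = (y != c) if complement else (y == c)       # CBayes pools every *other* class
        counts = d[:, pool].sum(axis=1)                   # per-word mass in the pooled documents
        theta = (counts + alpha_i) / (counts.sum() + alpha_i * n_words)
        w[ci] = -np.log(theta) if complement else np.log(theta)
    if complement:
        w /= np.abs(w).sum(axis=1, keepdims=True)         # CBayes weight normalization
    return classes, w

def classify(t, classes, w):
    """Label a vector t of word counts: l(t) = argmax_c sum_i t_i * w_ci."""
    return classes[np.argmax(w @ t)]

For example, given a count matrix d and label array y, classes, w = train_weights(tfidf_l2(d), y, complement=True) trains a CBayes model, and classify(t, classes, w) labels a new count vector t. Note that both models share the same decision rule; only the weights differ.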
Running from the command line
Mahout provides CLI drivers for all of the above steps. Here we give a brief overview of the Mahout CLI commands used to preprocess the data, train the model, and assign labels to a holdout set. An example script covering the full process, from data acquisition through classification, is given for the classic 20 Newsgroups corpus.
- Preprocessing: For a set of sequence file formatted documents in PATH_TO_SEQUENCE_FILES, the mahout seq2sparse command performs the TF-IDF transformation (-wt tfidf option) and L2 length normalization (-n 2 option) as follows:
mahout seq2sparse
-i ${PATH_TO_SEQUENCE_FILES}
-o ${PATH_TO_TFIDF_VECTORS}
-nv
-n 2
-wt tfidf
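If the corpus is still a directory of raw text files, it can first be converted to sequence file format with mahout seqdirectory; a minimal invocation (the input path here is a placeholder) would be:
mahout seqdirectory
-i ${PATH_TO_RAW_TEXT}
-o ${PATH_TO_SEQUENCE_FILES}
-ow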
- Training: The model is then trained using mahout trainnb. The default is to train a Bayes model; the -c option is given to train a CBayes model:
mahout trainnb
-i ${PATH_TO_TFIDF_VECTORS}
-o ${PATH_TO_MODEL}/model
-li ${PATH_TO_MODEL}/labelindex
-ow
-c
- Label Assignment/Testing: Classification and testing on a holdout set can then be performed via mahout testnb. Again, the -c option indicates that the model is CBayes, and the -seq option tells mahout testnb to run sequentially:
mahout testnb
-i ${PATH_TO_TFIDF_TEST_VECTORS}
-m ${PATH_TO_MODEL}/model
-l ${PATH_TO_MODEL}/labelindex
-ow
-o ${PATH_TO_OUTPUT}
-c
-seq
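The holdout set itself can be created by randomly splitting the TF-IDF vectors with mahout split, as is done in the 20 Newsgroups example script; here ${PATH_TO_TFIDF_TRAIN_VECTORS} is a placeholder for the training split, and the 20% holdout is an arbitrary choice:
mahout split
-i ${PATH_TO_TFIDF_VECTORS}
--trainingOutput ${PATH_TO_TFIDF_TRAIN_VECTORS}
--testOutput ${PATH_TO_TFIDF_TEST_VECTORS}
--randomSelectionPct 20
--overwrite --sequenceFiles -xm sequential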
Command line options
A full list of options for each of the above drivers can be printed with the -h (--help) flag, e.g. mahout seq2sparse --help.
Examples
Mahout provides an example for Naive Bayes classification:
- Classify 20 Newsgroups
References
[1] Jason D. M. Rennie, Lawrence Shih, Jaime Teevan, and David R. Karger: Tackling the Poor Assumptions of Naive Bayes Text Classifiers. In: Proceedings of the 20th International Conference on Machine Learning (ICML 2003).