Working with the Bag-of-Words representation

The bow module in tmtoolkit contains several functions for working with Bag-of-Words (BoW) representations of documents. It’s divided into two sub-modules: bow.bow_stats and bow.dtm. The former implements several statistics and transformations for BoW representations, while the latter contains functions to create and convert sparse or dense document-term matrices (DTMs).

Most of the functions in both sub-modules accept and/or return sparse DTMs. The previous chapter contained a section about what sparse DTMs are and how they can be generated with tmtoolkit.

An example document-term matrix

Before we start with the bow.dtm module, we will generate a sparse DTM from a small example corpus.

[1]:
import random
random.seed(20191113)   # to make the sampling reproducible

import numpy as np
np.set_printoptions(precision=5)

from tmtoolkit.corpus import Corpus, print_summary

corpus = Corpus.from_builtin_corpus('en-NewsArticles', sample=5)
print_summary(corpus)
Corpus with 5 documents in English
> NewsArticles-1206 (135 tokens): Man critical after four - car collision in Dublin ...
> NewsArticles-3665 (1158 tokens): Presidential elections in France have never been a...
> NewsArticles-119 (110 tokens): Is a ' seven - day NHS ' feasible ?    The " seven...
> NewsArticles-2058 (1174 tokens): Merkel : ' Only if Europe is doing well , will Ger...
> NewsArticles-3016 (621 tokens): Farron likens PM 's politics to Trump 's and Putin...
total number of tokens: 3198 / vocabulary size: 1170

We employ a preprocessing pipeline that removes a lot of information from our original data in order to obtain a very condensed DTM.

[2]:
from tmtoolkit.corpus import (lemmatize, filter_for_pos, to_lowercase,
    remove_punctuation, filter_clean_tokens, remove_common_tokens,
    tokens_table)


corpus_norm = lemmatize(corpus, inplace=False)
filter_for_pos(corpus_norm, 'N')
to_lowercase(corpus_norm)
remove_punctuation(corpus_norm)
filter_clean_tokens(corpus_norm, remove_shorter_than=2)
# remove tokens that occur in all documents
remove_common_tokens(corpus_norm, df_threshold=5, proportions=0)

tokens_table(corpus_norm)
[2]:
doc position token is_punct is_stop lemma like_num pos tag
0 NewsArticles-119 0 day False False day False NOUN NN
1 NewsArticles-119 1 nhs False False NHS False PROPN NNP
2 NewsArticles-119 2 day False False day False NOUN NN
3 NewsArticles-119 3 nhs False False NHS False PROPN NNP
4 NewsArticles-119 4 pledge False False pledge False NOUN NN
... ... ... ... ... ... ... ... ... ...
914 NewsArticles-3665 349 article False False article False NOUN NN
915 NewsArticles-3665 350 author False False author False NOUN NN
916 NewsArticles-3665 351 al False False Al False PROPN NNP
917 NewsArticles-3665 352 jazeera False False Jazeera False PROPN NNP
918 NewsArticles-3665 353 policy False False policy.- False NOUN NN

919 rows × 9 columns

We retained all documents, but removed more than half of the token types:

[3]:
from tmtoolkit.corpus import vocabulary_size

len(corpus_norm), vocabulary_size(corpus_norm)
[3]:
(5, 516)

We fetch the document labels and vocabulary and convert them to NumPy arrays, because such arrays allow advanced indexing methods such as boolean indexing.

[4]:
from tmtoolkit.corpus import doc_labels

labels = np.array(doc_labels(corpus_norm))
labels
[4]:
array(['NewsArticles-119', 'NewsArticles-1206', 'NewsArticles-2058',
       'NewsArticles-3016', 'NewsArticles-3665'], dtype='<U17')
[5]:
from tmtoolkit.corpus import vocabulary

vocab = np.array(vocabulary(corpus_norm))
vocab[:10]  # only showing the first 10 token types here
[5]:
array(['110pm', '70', 'abuse', 'access', 'accession', 'accusation', 'act',
       'addition', 'address', 'administration'], dtype='<U16')

Finally, we generate the sparse DTM:

[6]:
from tmtoolkit.corpus import dtm

mat = dtm(corpus_norm)
mat
[6]:
<5x516 sparse matrix of type '<class 'numpy.int32'>'
        with 576 stored elements in Compressed Sparse Row format>

We now have a sparse DTM mat, an array of document labels labels representing the rows of the DTM, and an array of vocabulary tokens vocab representing the columns of the DTM. We will use this data for the remainder of the chapter.
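As a quick illustration of why the NumPy arrays are handy, we can look up a single DTM cell by document label and vocabulary token, e.g. the count of the token “day” in document “NewsArticles-119” (a minimal sketch; the chosen document and token are just examples from this sample):

row = np.flatnonzero(labels == 'NewsArticles-119')[0]   # row index of that document
col = np.flatnonzero(vocab == 'day')[0]                 # column index of that token
mat[row, col]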

The bow.dtm module

This module is quite small. Most importantly, there’s a function to convert a DTM to a pandas DataFrame, dtm_to_dataframe. Note that the generated dataframe is dense, i.e. it uses up (much) more memory than the input DTM.

Let’s generate a dataframe from our DTM, the document labels and the vocabulary:

[7]:
from tmtoolkit.bow.dtm import dtm_to_dataframe

dtm_to_dataframe(mat, labels, vocab)
[7]:
110pm 70 abuse access accession accusation act addition address administration ... wing winston work workers world wound year york yucel
NewsArticles-119 0 0 0 1 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 0
NewsArticles-1206 1 1 0 0 0 0 0 0 0 0 ... 0 0 0 0 0 0 0 0 0 2
NewsArticles-2058 0 0 0 0 1 1 0 0 0 0 ... 1 0 2 1 0 0 2 0 2 0
NewsArticles-3016 0 0 1 0 0 0 0 0 0 0 ... 0 1 0 0 3 1 0 1 0 0
NewsArticles-3665 0 0 0 1 0 0 1 1 1 1 ... 1 0 0 0 0 0 1 0 0 0

5 rows × 516 columns

We can see that the document labels were used as the dataframe’s index and the vocabulary tokens became the column names.

You can combine tmtoolkit with Gensim. The bow.dtm module provides several functions to convert data between both packages.
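For example, here is a minimal sketch of a round trip between a tmtoolkit DTM and a Gensim corpus. It assumes that the conversion functions dtm_to_gensim_corpus and gensim_corpus_to_dtm are available in your installed version; see the bow.dtm API reference for the full list of conversion functions:

from tmtoolkit.bow.dtm import dtm_to_gensim_corpus, gensim_corpus_to_dtm

# convert the sparse DTM to a Gensim "bag-of-words" corpus,
# i.e. a sequence of (term ID, count) pairs per document
gensim_corpus = dtm_to_gensim_corpus(mat)

# ... pass gensim_corpus to a Gensim model here ...

# convert such a corpus back to a sparse DTM
mat_from_gensim = gensim_corpus_to_dtm(gensim_corpus)
mat_from_gensim.shape   # should again be (number of documents, vocabulary size)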

The bow.bow_stats module

This module provides several statistics and transformations for sparse or dense DTMs.

Document lengths, document and term frequencies, token co-occurrences

Let’s start with the doc_lengths function, which simply gives the number of tokens per document (i.e. the row-wise sum of the DTM):

[8]:
from tmtoolkit.bow.bow_stats import doc_lengths

doc_lengths(mat)
[8]:
array([ 38,  40, 330, 157, 354])

The returned array is aligned with the document labels labels, so we can see that the last document, “NewsArticles-3665”, is the one with the most tokens. Or, to determine this computationally:

[9]:
labels[doc_lengths(mat).argmax()]
[9]:
'NewsArticles-3665'

While doc_lengths gives the row-wise sum across the DTM, term_frequencies gives the column-wise sum. This means it returns an array whose length equals the vocabulary size, where each entry reflects the number of occurrences of the respective vocabulary token (aka term).

Let’s calculate that measure, get its maximum and the token type(s) for that maximum value:

[10]:
from tmtoolkit.bow.bow_stats import term_frequencies

term_freq = term_frequencies(mat)
(term_freq.max(), vocab[term_freq == term_freq.max()])
[10]:
(23, array(['medium'], dtype='<U16'))

It’s also possible to calculate the proportional frequency, i.e. normalize the counts by the overall number of tokens via proportions=1. Alternatively, proportions=2 gives you log proportions.

[11]:
term_prop = term_frequencies(mat, proportions=1)
vocab[term_prop >= 0.01]
[11]:
array(['candidate', 'eu', 'macron', 'medium', 'merkel', 'refugee'],
      dtype='<U16')
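The log proportions obtained with proportions=2 can be used in the same way. A small sketch, assuming these are natural-log proportions (so the threshold must be log-transformed as well):

term_logprop = term_frequencies(mat, proportions=2)
# the same filter as above, applied on the log scale
vocab[term_logprop >= np.log(0.01)]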

The function doc_frequencies returns, for each token in the vocabulary, the number of documents in which that token occurs at least n times. You can control n via the parameter min_val, which is set to 1 by default. The returned array is aligned with the vocabulary. Here, we calculate the document frequency with the default value min_val=1, extract the maximum document frequency and see which of the tokens in the vocab array reach that maximum:

[12]:
from tmtoolkit.bow.bow_stats import doc_frequencies

df = doc_frequencies(mat)
max_df = df.max()
max_df, vocab[df == max_df]
[12]:
(4, array(['minister'], dtype='<U16'))

It turns out that the maximum document frequency is 4 and only the token "minister" reaches it. This means that only "minister" occurs at least once (because min_val is 1) in four of the five documents. Remember that during preprocessing, we removed all tokens that occur across all five documents, hence there can’t be a vocabulary token with a document frequency of 5.
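We can verify this with a one-line check:

# no token can occur in all five documents because of remove_common_tokens above
assert doc_frequencies(mat).max() < mat.shape[0]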

Let’s see which vocabulary tokens occur within a single document at least 10 times:

[13]:
df = doc_frequencies(mat, min_val=10)
vocab[df > 0]
[13]:
array(['candidate', 'eu', 'macron', 'medium', 'merkel', 'refugee'],
      dtype='<U16')

We can also calculate the co-document frequency or token co-occurrence matrix via codoc_frequencies. This measures, for each pair of vocabulary tokens, the number of documents in which both tokens occur at least n times. Again, you can control n via the parameter min_val, which is set to 1 by default. The result is a sparse matrix of shape vocabulary size by vocabulary size; its rows and columns both correspond to the vocabulary, so each cell gives the co-document frequency of one pair of tokens.

Let’s generate a co-document frequency matrix and convert it to a dense representation, because our further operations don’t support sparse matrices.

A co-document frequency matrix is symmetric about the diagonal, because the co-occurrence of a pair (token1, token2) is always the same as that of (token2, token1). We want to filter out these duplicate pairs and use np.triu for that: it keeps only the upper triangle of the matrix, i.e. it sets all values in the lower triangle and on the matrix diagonal to zero (passing k=1 does this):

[14]:
from tmtoolkit.bow.bow_stats import codoc_frequencies

codoc_mat = codoc_frequencies(mat).todense()
codoc_upper = np.triu(codoc_mat, k=1)
codoc_upper
[14]:
array([[0, 1, 0, ..., 0, 0, 1],
       [0, 0, 0, ..., 0, 0, 1],
       [0, 0, 0, ..., 1, 0, 0],
       ...,
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0]])

Now we create a list that contains the pairs of tokens occurring together in at least two documents (codoc_upper > 1), along with their co-document frequency:

[15]:
interesting_pairs = [(vocab[t1], vocab[t2], codoc_upper[t1, t2])
                     for t1, t2 in zip(*np.where(codoc_upper > 1))]
# sort by codoc freq. in desc. order
sorted(interesting_pairs, key=lambda x: x[2], reverse=True)
[15]:
[('government', 'minister', 3),
 ('minister', 'time', 3),
 ('access', 'channel', 2),
 ('access', 'day', 2),
 ('access', 'minister', 2),
 ('access', 'news', 2),
 ('april', 'author', 2),
 ('april', 'co', 2),
 ('april', 'critic', 2),
 ('april', 'distribution', 2),
 ('april', 'heart', 2),
 ('april', 'law', 2),
 ('april', 'minister', 2),
 ('april', 'policy', 2),
 ('april', 'question', 2),
 ('april', 'right', 2),
 ('april', 'state', 2),
 ('april', 'support', 2),
 ('april', 'system', 2),
 ('april', 'time', 2),
 ...]

Generate sorted lists and dataframes according to term frequency

When working with DTMs, it’s often helpful to rank terms per document according to their frequency. This is what sorted_terms does for you. It further lets you specify the sorting order (the default is descending order via ascending=False) and several limits (all of these are demonstrated in the examples below):

  • lo_thresh for the minimum term frequency

  • hi_thresh for the maximum term frequency

  • top_n for the maximum number of terms per document

Let’s display the top three tokens per document by frequency:

[16]:
from tmtoolkit.bow.bow_stats import sorted_terms

sorted_terms(mat, vocab, top_n=3)
[16]:
[[('day', 3), ('nhs', 2), ('bbc', 2)],
 [('car', 4), ('garda', 4), ('collision', 3)],
 [('merkel', 14), ('refugee', 13), ('eu', 13)],
 [('politic', 7), ('party', 6), ('farron', 5)],
 [('medium', 23), ('candidate', 19), ('macron', 15)]]

The output is a list for each document (this means the output is aligned with the document labels in labels), with three pairs of (token, frequency) each. It’s also possible to get this data as a dataframe via sorted_terms_table, which gives a better overview and also includes the document labels. It accepts the same parameters for sorting and limiting the results:

[17]:
from tmtoolkit.bow.bow_stats import sorted_terms_table

sorted_terms_table(mat, vocab, labels, top_n=3)
[17]:
token value
doc rank
NewsArticles-119 1 day 3
2 nhs 2
3 bbc 2
NewsArticles-1206 1 car 4
2 garda 4
3 collision 3
NewsArticles-2058 1 merkel 14
2 refugee 13
3 eu 13
NewsArticles-3016 1 politic 7
2 party 6
3 farron 5
NewsArticles-3665 1 medium 23
2 candidate 19
3 macron 15
[18]:
sorted_terms_table(mat, vocab, labels, lo_thresh=5)
[18]:
token value
doc rank
NewsArticles-2058 1 merkel 14
2 refugee 13
3 eu 13
4 germany 8
5 country 8
6 turkey 6
7 europe 6
NewsArticles-3016 1 politic 7
2 party 6
NewsArticles-3665 1 medium 23
2 candidate 19
3 macron 15
4 france 9
5 election 9
6 le 7
7 coverage 6
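The remaining parameters can be combined in the same manner. For instance, the following sketch restricts the output to a medium frequency band and sorts it in ascending order (the exact inclusivity of the thresholds is documented in the API):

# terms with frequencies between the two thresholds, least frequent first
sorted_terms_table(mat, vocab, labels, lo_thresh=5, hi_thresh=10, ascending=True)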

Term frequency–inverse document frequency transformation (tf-idf)

Term frequency–inverse document frequency transformation (tf-idf) is a matrix transformation that is often applied to DTMs in order to reflect the importance of a token to a document. The bow_stats module provides the function tfidf for this. When the input is a sparse matrix and the calculation supports operating on sparse matrices, the output will also be a sparse matrix, which makes the tf-idf transformation very memory-efficient.

Let’s apply tf-idf to our DTM with the default settings:

[19]:
from tmtoolkit.bow.bow_stats import tfidf

tfidf_mat = tfidf(mat)
tfidf_mat
[19]:
<5x516 sparse matrix of type '<class 'numpy.float64'>'
        with 576 stored elements in COOrdinate format>

We can see that the output is a sparse matrix. Let’s have a look at its values:

[20]:
tfidf_mat.todense()
[20]:
matrix([[0.     , 0.     , 0.     , ..., 0.     , 0.     , 0.     ],
        [0.03132, 0.03132, 0.     , ..., 0.     , 0.     , 0.06264],
        [0.     , 0.     , 0.     , ..., 0.     , 0.00759, 0.     ],
        [0.     , 0.     , 0.00798, ..., 0.00798, 0.     , 0.     ],
        [0.     , 0.     , 0.     , ..., 0.     , 0.     , 0.     ]])

Of course we can also pass this matrix to sorted_terms_table and observe that some rankings have changed in comparison to the untransformed DTM:

[21]:
sorted_terms_table(tfidf_mat, vocab, labels, top_n=3)
[21]:
token value
doc rank
NewsArticles-119 1 day 0.077434
2 bbc 0.065935
3 victoria 0.065935
NewsArticles-1206 1 car 0.125276
2 garda 0.125276
3 collision 0.093957
NewsArticles-2058 1 merkel 0.053148
2 refugee 0.049351
3 eu 0.038639
NewsArticles-3016 1 politic 0.055856
2 farron 0.039897
3 party 0.037484
NewsArticles-3665 1 medium 0.081394
2 candidate 0.067239
3 macron 0.053083

The tf-idf matrix is calculated from a DTM \(D\) as \(\textit{tf}(D) \cdot \textit{idf}(D)\).

There are different variants for how to calculate the term frequency \(\textit{tf}(D)\) and the inverse document frequency \(\textit{idf}(D)\). The package tmtoolkit contains several functions that implement some of these variants. For \(\textit{tf}()\) these are:

  • tf_binary: binary term frequency matrix (matrix contains 1 whenever a term occurred in a document, else 0)

  • tf_proportions: proportional term frequency matrix (term counts are normalized by document length)

  • tf_log: log-normalized term frequency matrix (by default \(\log(1 + D)\))

  • tf_double_norm: double-normalized term frequency matrix \(K + (1-K) \cdot \frac{D}{\textit{rowmax}(D)}\), where \(\textit{rowmax}(D)\) is a vector containing the maximum term count per document

As you can see, all the term frequency functions are prefixed with tf_. There are also two variants for \(\textit{idf}()\):

  • idf: calculates \(\log(a + \frac{N}{b + \textit{df}(D)})\) where \(a\) and \(b\) are smoothing constants, \(N\) is the number of documents and \(\textit{df}(D)\) calculates the document frequency

  • idf_probabilistic: calculates \(\log(a + \frac{N - \textit{df}(D)}{\textit{df}(D)})\)

The term frequency functions return a sparse matrix whenever the input is sparse and the computation permits it. Let’s try out two of them:

[22]:
from tmtoolkit.bow.bow_stats import tf_binary, tf_proportions

tf_binary(mat).todense()
[22]:
matrix([[0, 0, 0, ..., 0, 0, 0],
        [1, 1, 0, ..., 0, 0, 1],
        [0, 0, 0, ..., 0, 1, 0],
        [0, 0, 1, ..., 1, 0, 0],
        [0, 0, 0, ..., 0, 0, 0]])
[23]:
tf_proportions(mat).todense()
[23]:
matrix([[0.     , 0.     , 0.     , ..., 0.     , 0.     , 0.     ],
        [0.025  , 0.025  , 0.     , ..., 0.     , 0.     , 0.05   ],
        [0.     , 0.     , 0.     , ..., 0.     , 0.00606, 0.     ],
        [0.     , 0.     , 0.00637, ..., 0.00637, 0.     , 0.     ],
        [0.     , 0.     , 0.     , ..., 0.     , 0.     , 0.     ]])
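The two other variants work analogously. For instance, a quick sketch of tf_log (remember that its default is \(\log(1 + D)\), so zero counts stay zero and the result can remain sparse):

from tmtoolkit.bow.bow_stats import tf_log

tf_log_mat = tf_log(mat)
# the largest entry should equal log(1 + mat.max()) if the default natural log is used
tf_log_mat.max()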

Just like the document frequency function doc_frequencies, the inverse document frequency functions also return a vector with the same length as the vocabulary. Let’s use these functions and have a look at the inverse document frequency of certain tokens:

[24]:
from tmtoolkit.bow.bow_stats import idf, idf_probabilistic

idf_vec = idf(mat)
list(zip(vocab, idf_vec))[:10]
[24]:
[('110pm', 1.252762968495368),
 ('70', 1.252762968495368),
 ('abuse', 1.252762968495368),
 ('access', 0.9808292530117262),
 ('accession', 1.252762968495368),
 ('accusation', 1.252762968495368),
 ('act', 1.252762968495368),
 ('addition', 1.252762968495368),
 ('address', 1.252762968495368),
 ('administration', 1.252762968495368)]
[25]:
probidf_vec = idf_probabilistic(mat)

list(zip(vocab, probidf_vec))[:10]
[25]:
[('110pm', 1.6094379124341003),
 ('70', 1.6094379124341003),
 ('abuse', 1.6094379124341003),
 ('access', 0.916290731874155),
 ('accession', 1.6094379124341003),
 ('accusation', 1.6094379124341003),
 ('act', 1.6094379124341003),
 ('addition', 1.6094379124341003),
 ('address', 1.6094379124341003),
 ('administration', 1.6094379124341003)]
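We can reproduce these numbers from the idf formula given above. A small check, assuming the smoothing constants \(a\) and \(b\) both default to 1:

N = mat.shape[0]                   # number of documents
df = doc_frequencies(mat)          # document frequencies (min_val=1)

idf_manual = np.log(1 + N / (1 + df))
np.allclose(idf_manual, idf_vec)   # should be True with the default smoothing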

Note that due to our very small sample, there’s not much variation in the inverse document frequency values.

By default, tfidf uses tf_proportions and idf to calculate the tf-idf matrix. You can plug in other functions to get other variants of tf-idf:

[26]:
from tmtoolkit.bow.bow_stats import tf_double_norm

# we also set a "K" parameter for "tf_double_norm"
tfidf_mat2 = tfidf(mat, tf_func=tf_double_norm,
                   idf_func=idf_probabilistic, K=0.25)
tfidf_mat2
[26]:
array([[0.40236, 0.40236, 0.40236, ..., 0.40236, 0.40236, 0.40236],
       [0.70413, 0.70413, 0.40236, ..., 0.40236, 0.40236, 1.0059 ],
       [0.40236, 0.40236, 0.40236, ..., 0.40236, 0.5748 , 0.40236],
       [0.40236, 0.40236, 0.5748 , ..., 0.5748 , 0.40236, 0.40236],
       [0.40236, 0.40236, 0.40236, ..., 0.40236, 0.40236, 0.40236]])
[27]:
sorted_terms_table(tfidf_mat2, vocab, labels, top_n=3)
[27]:
token value
doc rank
NewsArticles-119 1 bbc 1.207078
2 nhs 1.207078
3 victoria 1.207078
NewsArticles-1206 1 car 1.609438
2 garda 1.609438
3 collision 1.307668
NewsArticles-2058 1 merkel 1.609438
2 refugee 1.523218
3 germany 1.092119
NewsArticles-3016 1 politic 1.609438
2 farron 1.264558
3 putin 1.092119
NewsArticles-3665 1 medium 1.609438
2 candidate 1.399511
3 macron 1.189585
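Finally, we can confirm that the default tfidf result is really the element-wise product of its default building blocks. A minimal check, assuming tf_proportions and idf with their default parameters:

tf_mat = tf_proportions(mat).toarray()   # default term frequency variant
idf_default = idf(mat)                   # default inverse document frequency vector

# the idf vector is broadcast over the rows, i.e. applied per vocabulary column
np.allclose(tf_mat * idf_default, tfidf(mat).toarray())   # should be True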

Once we have generated a DTM, we can use it for topic modeling. The next chapter will show how tmtoolkit can be used to evaluate the quality of your model, export essential information from it and visualize the results.