tmtoolkit: Text mining and topic modeling toolkit
tmtoolkit is a set of tools for text mining and topic modeling with Python, developed especially for use in the social sciences, linguistics, journalism, and related disciplines. It aims for easy installation, extensive documentation, and a clear programming interface while offering good performance on large datasets by means of vectorized operations (via NumPy) and parallel computation (using Python’s multiprocessing module and the loky package). tmtoolkit’s text mining capabilities are built around SpaCy, which offers many language models. Currently, the following languages are supported for text mining:
Catalan
Chinese
Croatian
Danish
Dutch
English
Finnish
French
German
Greek
Italian
Japanese
Korean
Lithuanian
Macedonian
Norwegian Bokmål
Polish
Portuguese
Romanian
Russian
Spanish
Swedish
Ukrainian
The documentation for tmtoolkit is available at tmtoolkit.readthedocs.org and the GitHub code repository is at github.com/internaut/tmtoolkit.
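As a first impression of the programming interface, here is a minimal quickstart sketch. It assumes tmtoolkit and an English SpaCy language model are installed (the Installation chapter covers both steps); the tiny sample documents are made up for illustration:

    from tmtoolkit.corpus import Corpus, print_summary

    # build a small corpus from raw document strings; tokenization and
    # further linguistic processing are done by the English SpaCy model
    corp = Corpus({'doc1': 'Hello world!',
                   'doc2': 'This is a small example corpus.'},
                  language='en')

    print_summary(corp)   # prints document labels, tokens and corpus statistics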
Features
Text preprocessing and text mining
The tmtoolkit package offers several text preprocessing and text mining methods, including:
tokenization, sentence segmentation, part-of-speech (POS) tagging, named-entity recognition (NER) (via SpaCy)
extensive pattern matching capabilities (exact matching, regular expressions or “glob” patterns) to be used in many methods of the package, e.g. for filtering on token or document level, or for keywords-in-context (KWIC)
adding and managing custom document and token attributes
accessing text corpora along with their document and token attributes as dataframes
calculating and visualizing corpus summary statistics
identifying and joining collocations
calculating token cooccurrences
generating n-grams and using n-gram models
generating sparse document-term matrices
Wherever possible and useful, these methods can operate in parallel to speed up computations with large datasets.
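The sketch below illustrates a few of these preprocessing functions on a small corpus: lemmatization, lowercasing, removal of punctuation and stopwords, a keywords-in-context query, and generation of a sparse document-term matrix. The function names appear in the API reference later in this document, but the sample data and exact calls are illustrative assumptions rather than a definitive recipe:

    from tmtoolkit.corpus import (Corpus, lemmatize, to_lowercase,
                                  filter_clean_tokens, kwic_table, dtm,
                                  vocabulary)

    corp = Corpus({'a': 'The dogs were running in the park.',
                   'b': 'A dog ran across the street.'},
                  language='en')

    # normalization: corpus functions like these modify `corp` in place
    lemmatize(corp)            # e.g. "dogs" -> "dog", "running" -> "run"
    to_lowercase(corp)
    filter_clean_tokens(corp)  # removes punctuation and stopwords

    print(vocabulary(corp))          # remaining unique tokens
    print(kwic_table(corp, 'dog'))   # keywords-in-context for "dog"

    mat = dtm(corp)                  # sparse document-term matrix
    print(mat.shape)                 # (number of documents, vocabulary size)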
Topic modeling
model computation in parallel for different corpora and/or parameter sets
support for lda, scikit-learn and gensim topic modeling backends
evaluation of topic models (e.g. in order to find an optimal number of topics for a given dataset) using several implemented metrics:
model coherence (Mimno et al. 2011) or coherence metrics implemented in Gensim
KL divergence method (Arun et al. 2010)
probability of held-out documents (Wallach et al. 2009)
pair-wise cosine distance method (Cao Juan et al. 2009)
harmonic mean method (Griffiths, Steyvers 2004)
the log-likelihood or perplexity methods natively implemented in lda, sklearn or gensim
common statistics for topic models such as word saliency and distinctiveness (Chuang et al. 2012) and topic-word relevance (Sievert and Shirley 2014)
export estimated document-topic and topic-word distributions to Excel
visualize topic-word distributions and document-topic distributions as word clouds or heatmaps
model coherence (Mimno et al. 2011) for individual topics
integrate pyLDAvis to visualize results
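A typical evaluation workflow with the lda backend might look like the following sketch, which fits models for a range of topic numbers in parallel and plots two of the metrics listed above. It assumes the lda package is installed and reuses the sparse document-term matrix `mat` from the preprocessing sketch; the parameter values are made up for illustration:

    from tmtoolkit.topicmod.tm_lda import evaluate_topic_models
    from tmtoolkit.topicmod.evaluate import results_by_parameter
    from tmtoolkit.topicmod.visualize import plot_eval_results

    # fit and evaluate models with 5, 10, ..., 50 topics in parallel
    eval_results = evaluate_topic_models(
        mat,
        varying_parameters=[{'n_topics': k} for k in range(5, 51, 5)],
        constant_parameters={'n_iter': 1000, 'random_state': 20191122},
        metric=['cao_juan_2009', 'arun_2010'],
    )

    # collect the metric values by number of topics and plot them
    eval_by_topics = results_by_parameter(eval_results, 'n_topics')
    plot_eval_results(eval_by_topics)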
Other features
loading and cleaning of raw text from text files, tabular files (CSV or Excel), ZIP files or folders
common statistics and transformations for document-term matrices like word cooccurrence and tf-idf
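For example, a tf-idf weighted matrix can be computed from any sparse document-term matrix, as in this brief sketch (again reusing the matrix `mat` from the sketches above; tfidf is part of the tmtoolkit.bow API listed below):

    from tmtoolkit.bow.bow_stats import tfidf

    # weight the sparse document-term matrix by tf-idf
    tfidf_mat = tfidf(mat)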
Limits
only languages for which a SpaCy language model is available are supported
all data must reside in memory, i.e. there is no streaming of large data from disk (which, for example, Gensim supports)
Built-in datasets
Currently tmtoolkit comes with the following built-in datasets, which can be loaded via from_builtin_corpus:
“en-NewsArticles”: News Articles (Dai, Tianru, 2017, “News Articles”, https://doi.org/10.7910/DVN/GMFCTR, Harvard Dataverse, V1)
random samples from ParlSpeech V2 (Rauh, Christian; Schwalbach, Jan, 2020, “The ParlSpeech V2 data set: Full-text corpora of 6.3 million parliamentary speeches in the key legislative chambers of nine representative democracies”, https://doi.org/10.7910/DVN/L4OAKN, Harvard Dataverse) for different languages:
“de-parlspeech-v2-sample-bundestag”
“en-parlspeech-v2-sample-houseofcommons”
“es-parlspeech-v2-sample-congreso”
“nl-parlspeech-v2-sample-tweedekamer”
“en-healthtweets”: Health News in Twitter Data Set
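Listing the available datasets and loading one of them might look like the following sketch; builtin_corpora_info and Corpus.from_builtin_corpus appear in the API reference below, and loading the English dataset requires the English SpaCy model:

    from tmtoolkit.corpus import Corpus, builtin_corpora_info

    print(builtin_corpora_info())   # labels of all built-in datasets

    # load one of the built-in corpora
    news = Corpus.from_builtin_corpus('en-NewsArticles')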
About this documentation
This documentation guides you through several chapters, from installing tmtoolkit to specific use cases, and shows examples with built-in corpora and other datasets. All “hands-on” chapters from Getting started to Topic modeling are generated from Jupyter Notebooks. If you want to follow along using these notebooks, you can download them from the GitHub repository.
There are also a few other examples as plain Python scripts available in the examples folder of the GitHub repository.
License
Code licensed under Apache License 2.0. See LICENSE file.
Contents:
- Installation
- Getting started
- Working with text corpora
- Text preprocessing and basic text mining
- Optional: enabling logging output
- Loading example data
- Accessing tokens and token attributes
- Corpus vocabulary
- Visualizing corpus summary statistics
- Text processing: transforming documents and tokens
- Aside: A Corpus object as “state machine”
- Lemmatization and token normalization
- Identifying and joining token collocations
- Visualizing corpus statistics of the transformed corpus
- Accessing the corpus documents as SpaCy documents
- Keywords-in-context (KWIC) and general filtering methods
- Token cooccurrence matrices
- Working with document and token attributes
- Generating n-grams
- Generating a sparse document-term matrix (DTM)
- Serialization: Saving and loading Corpus objects
- Working with the Bag-of-Words representation
- Topic modeling
- Interoperability with R
- API
- tmtoolkit.bow
- tmtoolkit.corpus
- Corpus class and corpus functions
Corpus
Document
builtin_corpora_info
corpus_add_files
corpus_add_folder
corpus_add_tabular
corpus_add_zip
corpus_collocations
corpus_join_documents
corpus_ngramify
corpus_num_chars
corpus_num_tokens
corpus_retokenize
corpus_sample
corpus_split_by_paragraph
corpus_split_by_token
corpus_summary
corpus_tokens_flattened
corpus_unique_chars
deserialize_corpus
doc_frequencies
doc_labels
doc_labels_sample
doc_lengths
doc_num_sents
doc_sent_lengths
doc_texts
doc_token_lengths
doc_tokens
doc_vectors
document_from_attrs
document_token_attr
dtm
filter_clean_tokens
filter_documents
filter_documents_by_docattr
filter_documents_by_label
filter_documents_by_length
filter_documents_by_mask
filter_for_pos
filter_tokens
filter_tokens_by_doc_frequency
filter_tokens_by_mask
filter_tokens_with_kwic
find_documents
join_collocations_by_patterns
join_collocations_by_statistic
kwic
kwic_table
lemmatize
load_corpus_from_picklefile
load_corpus_from_tokens
load_corpus_from_tokens_table
ngrams
normalize_unicode
numbers_to_magnitudes
print_summary
remove_chars
remove_common_tokens
remove_document_attr
remove_documents
remove_documents_by_docattr
remove_documents_by_label
remove_documents_by_length
remove_documents_by_mask
remove_punctuation
remove_token_attr
remove_tokens
remove_tokens_by_mask
remove_uncommon_tokens
save_corpus_to_picklefile
serialize_corpus
set_document_attr
set_token_attr
simplified_pos
simplify_unicode
spacydocs
to_lowercase
to_uppercase
token_cooccurrence
token_vectors
tokens_table
transform_tokens
vocabulary
vocabulary_counts
vocabulary_size
- Functions to visualize corpus summary statistics
- tmtoolkit.ngrammodels
- tmtoolkit.strings
- tmtoolkit.tokenseq
Counter
collapse_tokens
copy
empty_chararray
index_windows_around_matches
indices_of_matches
npmi
pad_sequence
pmi
pmi2
pmi3
ppmi
token_collocation_matrix
token_collocations
token_hash_convert
token_join_subsequent
token_lengths
token_match
token_match_multi_pattern
token_match_subsequent
token_ngrams
unique_chars
- tmtoolkit.topicmod
- Evaluation metrics for Topic Modeling
- Printing, importing and exporting topic model results
ldamodel_full_doc_topics
ldamodel_full_topic_words
ldamodel_top_doc_topics
ldamodel_top_topic_docs
ldamodel_top_topic_words
ldamodel_top_word_topics
load_ldamodel_from_pickle
print_ldamodel_distribution
print_ldamodel_doc_topics
print_ldamodel_topic_words
save_ldamodel_summary_to_excel
save_ldamodel_to_pickle
- Statistics for topic models and BoW matrices
exclude_topics
filter_topics
generate_topic_labels_from_top_words
least_distinct_words
least_probable_words
least_relevant_words_for_topic
least_salient_words
marginal_topic_distrib
marginal_word_distrib
most_distinct_words
most_probable_words
most_relevant_words_for_topic
most_salient_words
top_n_from_distribution
top_words_for_topics
topic_word_relevance
word_distinctiveness
word_saliency
- Parallel model fitting and evaluation with lda
- Parallel model fitting and evaluation with scikit-learn
- Parallel model fitting and evaluation with Gensim
- Visualize topic models and topic model evaluation results
- Base classes for parallel model fitting and evaluation
- tmtoolkit.utils
applychain
argsort
as_chararray
chararray_elem_size
check_context_size
combine_sparse_matrices_columnwise
dict2df
disable_logging
empty_chararray
enable_logging
flatten_list
greedy_partitioning
indices_of_matches
linebreaks_win2unix
mat2d_window_from_indices
merge_dicts
merge_sets
pairwise_max_table
partial_sparse_log
path_split
pickle_data
read_text_file
sample_dict
set_logging_level
sorted_df
split_func_args
unpickle_file
- Development
- Version history
- 0.12.0 - 2023-05-03
- 0.11.2 - 2022-03-11
- 0.11.1 - 2022-02-10
- 0.11.0 - 2022-02-08
- 0.10.0 - 2020-08-03
- 0.9.0 - 2019-12-20
- 0.8.0 - 2019-02-05
- 0.7.3 - 2018-09-17 (last release to support Python 2.7)
- 0.7.2 - 2018-07-23
- 0.7.1 - 2018-06-18
- 0.7.0 - 2018-06-18
- 0.6.3 - 2018-06-01
- 0.6.2 - 2018-04-27
- 0.6.1 - 2018-04-27
- 0.6.0 - 2018-04-25
- 0.5.0 - 2018-02-13
- 0.4.2 - 2018-02-06
- 0.4.1 - 2018-01-24
- 0.4.0 - 2018-01-18
- References