Reproducible text analysis with Topic Modeling
Abstract
Topic modeling is a popular text mining method for finding the central topics in large collections of texts. An algorithm identifies groups of words that frequently occur together in the texts; these groups of words are called "topics". Because text collections of any size can be evaluated automatically in this way, topic modeling can be an insightful tool for various text-based applications, such as social media studies or psychotherapy research.
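To make this concrete, the following is a minimal sketch of fitting a topic model and inspecting the word groups it finds. It uses Python's scikit-learn rather than the R-based code in the linked GitHub project, and the tiny corpus, the number of topics, and all other settings are illustrative assumptions, not the setup used in the talk.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Tiny illustrative corpus (in practice: thousands of documents)
docs = [
    "therapy session anxiety patient treatment",
    "patient treatment depression therapy outcome",
    "social media posts users online network",
    "online network users share social content",
]

# Bag-of-words document-term matrix
vectorizer = CountVectorizer()
dtm = vectorizer.fit_transform(docs)

# Fit LDA with an assumed number of topics
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(dtm)

# Each "topic" is a group of words with high weight in that topic
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top_words = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"Topic {k}: {', '.join(top_words)}")
```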
Even though topic modeling is an "unsupervised" machine learning technique, many parameter decisions have to be made by the person doing the analysis. Because these decisions can strongly affect the results, and the model estimation itself depends in part on random numbers, good documentation and freely available analysis code are crucial for reproducible topic modeling.
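As a small illustration of the role of random numbers, the sketch below (again scikit-learn, reusing the dtm from the example above; the seed value is arbitrary) shows that fixing the random seed makes repeated model runs identical, which is one ingredient of a reproducible analysis alongside documenting every other parameter choice.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Record every modeling decision, including the seed
params = {"n_components": 2, "random_state": 42}

# Two runs with identical data, parameters, and seed
lda_a = LatentDirichletAllocation(**params).fit(dtm)
lda_b = LatentDirichletAllocation(**params).fit(dtm)

# Identical topic-word matrices -> the run is reproducible
print(np.allclose(lda_a.components_, lda_b.components_))  # True

# Without a fixed random_state, repeated runs can yield
# different (e.g., reordered or slightly shifted) topics.
```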
In this introductory demonstration, the established topic modeling variant "Latent Dirichlet Allocation" (LDA) is presented and applied to a freely available dataset. Special emphasis is placed on topic validity and topic reliability, two often overlooked but important model properties. A worked example shows how transparent and detailed code can make the analysis reproducible.
A brief introduction to PsychTopics (psychtopics.org), ZPID's open-source tool for exploring psychological research topics and trends, is also provided. The tool uses a novel topic modeling approach to dynamically identify topics in psychological publications and displays them interactively in an R Shiny app.
Speaker: André Bittermann is acting head of the Big Data research area at ZPID and product manager for PsychTopics.
Links to material
Presentation slides:
http://dx.doi.org/10.23668/psycharchives.8382
Video:
https://zpid.cloud.panopto.eu/Panopto/Pages/Viewer.aspx?id=73b29bbb-34f2-4e29-9e8e-af8d00b1f133
GitHub project:
https://github.com/abitter/PTOS
Tool:
http://psychtopics.org