Dataset
The dataset contains 10'000 classified comments, each labelled as normal, toxic, obscene, or both toxic and obscene. In total, 4'963 comments were neither toxic nor obscene. The dataset is an excerpt from a larger dataset provided by Jigsaw for its Toxic Comment Classification Challenge.
The following word clouds show the most frequent words in normal comments (left) and in comments that are both toxic and obscene (right).
Word cloud for normal comments
Word cloud for toxic & obscene comments
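Since I cannot share my assignment code, here is a minimal sketch of how such word clouds can be produced with the Python wordcloud package. The column names and file name are assumptions that mirror the Jigsaw data layout, not my original script.

```python
# Hypothetical sketch: generating the two word clouds with the `wordcloud`
# package. Column names ("comment_text", "toxic", "obscene") and the file
# name are assumptions based on the Jigsaw data layout.
import pandas as pd
from wordcloud import WordCloud, STOPWORDS

df = pd.read_csv("train.csv")  # assumed file name

normal = df[(df["toxic"] == 0) & (df["obscene"] == 0)]["comment_text"]
toxic_obscene = df[(df["toxic"] == 1) & (df["obscene"] == 1)]["comment_text"]

for name, texts in [("normal", normal), ("toxic_obscene", toxic_obscene)]:
    cloud = WordCloud(stopwords=STOPWORDS, width=800, height=400)
    cloud.generate(" ".join(texts))
    cloud.to_file(f"wordcloud_{name}.png")
```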
Approach

I tried three different classifiers to find my optimal model: regularised regression, random forest, and a support vector machine. A lasso regression performed best in my case, as it offered the best trade-off between accuracy and computational cost (for fairness, we had to run the models on our personal computers). I used grid search to optimise lambda, the tuning parameter of the lasso/ridge models.
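As an illustration of the lambda search (not my actual assignment code), here is a minimal sketch with scikit-learn, where the parameter C is the inverse of lambda:

```python
# Illustrative sketch of tuning the lasso penalty for a classification task.
# scikit-learn's LogisticRegression uses C = 1/lambda, so the grid below
# corresponds to a range of L1 (lasso) regularisation strengths.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

lasso = LogisticRegression(penalty="l1", solver="liblinear", max_iter=1000)
param_grid = {"C": np.logspace(-3, 2, 20)}  # assumed search range

search = GridSearchCV(lasso, param_grid, scoring="accuracy", cv=5)
# search.fit(X_train, y_train)  # X_train: bag-of-words matrix, y_train: labels
# print(search.best_params_, search.best_score_)
```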
As we used a bag-of-words representation, I also ran a grid search over the text preprocessing choices (for example term weighting and the removal of stopwords, punctuation, and whitespace). Furthermore, I added a few new features, such as measures for latent concepts like "anger" or "disgust", and for lexical complexity. This resulted in my final model, which was the third-best in my class and only marginally less accurate than the one presented by our professor.
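The combined search over preprocessing choices, extra features, and regularisation strength can be sketched as follows. Again, this is only an illustration with scikit-learn, not my original code: a crude average-word-length feature stands in for the lexical-complexity measure, and the emotion features are omitted because they would require an external lexicon such as NRC.

```python
# Sketch: grid search over bag-of-words preprocessing options plus a simple
# hand-crafted feature. All parameter values are placeholders.
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import FeatureUnion, Pipeline


class LexicalComplexity(BaseEstimator, TransformerMixin):
    """Average word length as a crude lexical-complexity measure."""

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return np.array(
            [[np.mean([len(w) for w in text.split()] or [0])] for text in X]
        )


pipe = Pipeline([
    ("features", FeatureUnion([
        ("bow", TfidfVectorizer()),
        ("complexity", LexicalComplexity()),
    ])),
    ("clf", LogisticRegression(penalty="l1", solver="liblinear")),
])

param_grid = {
    "features__bow__use_idf": [True, False],         # term weighting
    "features__bow__stop_words": [None, "english"],  # stopword removal
    "features__bow__lowercase": [True, False],       # simple normalisation
    "clf__C": [0.1, 1, 10],                          # C = 1/lambda
}

search = GridSearchCV(pipe, param_grid, scoring="accuracy", cv=5)
# search.fit(comments, labels)
```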
Unfortunately, I am prohibited from sharing my code for this assignment, as the school might want to reuse it in the future.
Further development
I was intrigued by the assignment, but a bit frustrated by the rather small dataset, which restricted the model choice (i.e. it is not large enough for a neural network). I therefore decided to download the full dataset and train a neural network on it, hoping to improve the classification. The original dataset contains 150'000 classified comments and additional classes, such as threat or identity hate.
Using simple preprocessing steps (and a bag-of-words representation), I trained a neural net using PyTorch and achieved an accuracy of 0.96.
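As a rough sketch of this setup, the network below is a small PyTorch MLP over a bag-of-words matrix. The layer sizes, learning rate, and number of epochs are assumptions, not the exact settings behind the 0.96 accuracy.

```python
# Minimal sketch of an MLP over a bag-of-words representation in PyTorch.
# All hyperparameters are placeholders.
import torch
import torch.nn as nn


class CommentMLP(nn.Module):
    def __init__(self, vocab_size, n_classes, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vocab_size, hidden),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.net(x)


# Assuming X is a dense bag-of-words tensor and y holds class indices:
# model = CommentMLP(vocab_size=X.shape[1], n_classes=6)
# optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
# loss_fn = nn.CrossEntropyLoss()  # for the multi-label Jigsaw targets,
#                                  # nn.BCEWithLogitsLoss would be usual
# for epoch in range(10):
#     optimiser.zero_grad()
#     loss = loss_fn(model(X), y)
#     loss.backward()
#     optimiser.step()
```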
I am currently rerunning and improving the code and will publish it on my GitHub page soon.
_____________

Methods Used
Regularised Regression, Random Forest, Support Vector Machine (SVM), Neural Network (MLP)