Cyberharassment (e.g., cyberbullying and cyberhate) is becoming increasingly prevalent with the rise of social media. Automatically detecting such content online is crucial to protecting the targeted group(s). The EAGER project is a set of hands-on labs that teach students how to apply machine learning to this social problem.
This lab teaches students how AI models can distinguish cyberbullying from non-cyberbullying text content. Students will learn data preprocessing, model training, and the evaluation metrics used to assess AI-based classifiers.
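The preprocess-train-evaluate workflow described above can be sketched as follows. This is an illustrative example, not the lab's actual pipeline: the six labeled messages are invented toy data, and a real lab would use a held-out test split rather than scoring the training set.

```python
# Minimal text-classification sketch: preprocessing, training, evaluation.
# The tiny labeled dataset below is hypothetical, for demonstration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score

# Toy messages: 1 = cyberbullying, 0 = benign (invented examples)
texts = [
    "you are so stupid and worthless",
    "nobody likes you, just leave",
    "great job on the presentation today",
    "happy birthday, hope you have fun",
    "what an idiot, everyone hates you",
    "thanks for helping me with homework",
]
labels = [1, 1, 0, 0, 1, 0]

# Preprocessing: convert raw text into TF-IDF feature vectors
vectorizer = TfidfVectorizer(lowercase=True, stop_words="english")
X = vectorizer.fit_transform(texts)

# Training: fit a simple linear classifier
clf = LogisticRegression()
clf.fit(X, labels)

# Evaluation: with so little data we score the training set itself;
# a real experiment would evaluate on unseen data.
preds = clf.predict(X)
print("accuracy:", accuracy_score(labels, preds))
print("f1:", f1_score(labels, preds))
```

Logistic regression over TF-IDF features is a common baseline; the same three-stage structure carries over when swapping in neural models.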
Cyberbullying can occur in images alone, or in a combination of image and text (e.g., memes). In this lab, students will extract visual features from images and combine them with textual features to detect multimodal cyberbullying.
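One simple way to combine the two modalities is feature-level fusion: extract a vector from each modality and concatenate them before classification. The sketch below uses random-projection stand-ins for the feature extractors (a real lab would use, e.g., a CNN for images and word embeddings for text); all shapes and names are assumptions for illustration.

```python
# Hedged sketch of feature-level (early) fusion for multimodal detection.
# The "extractors" are random projections standing in for real models.
import numpy as np

rng = np.random.default_rng(0)

def image_features(image):
    """Stand-in for a CNN feature extractor: flatten, then project to 64 dims."""
    flat = image.reshape(-1).astype(float)
    W = rng.standard_normal((flat.size, 64))  # hypothetical projection matrix
    return flat @ W

def text_features(token_ids):
    """Stand-in for averaged word embeddings: mean of 32-dim vectors."""
    emb = rng.standard_normal((1000, 32))  # hypothetical vocabulary embeddings
    return emb[token_ids].mean(axis=0)

image = rng.integers(0, 256, size=(8, 8, 3))  # toy 8x8 RGB "meme" image
tokens = np.array([5, 42, 7])                 # toy tokenized caption

# Fusion: concatenate visual and textual features into one vector
fused = np.concatenate([image_features(image), text_features(tokens)])
print(fused.shape)  # -> (96,): 64 visual + 32 textual dimensions
```

The fused 96-dimensional vector would then feed into a single classifier, letting the model use cues from both modalities at once.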
AI models are vulnerable to adversarial attacks: small, deliberately crafted input perturbations that cause incorrect predictions. In this lab, students will use different algorithms to generate adversarial images that fool models trained to detect cyberbullying, causing them to produce incorrect output.
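One classic attack algorithm of the kind this lab explores is the Fast Gradient Sign Method (FGSM), which perturbs the input in the direction that most increases the model's loss. Below is a minimal NumPy sketch against a hand-set logistic regression, not a trained cyberbullying detector; the weights, input, and epsilon are invented to make the flip visible.

```python
# Illustrative FGSM sketch: nudge the input along the sign of the loss
# gradient so a correctly classified example becomes misclassified.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical linear model: score = w . x + b, label 1 = "cyberbullying"
w = np.array([2.0, -1.0, 0.5])
b = 0.0

x = np.array([1.0, 0.2, 0.4])  # clean input, correctly classified as 1
y = 1                          # true label

# For logistic loss, the gradient w.r.t. the input x is (p - y) * w
p = sigmoid(w @ x + b)
grad_x = (p - y) * w

# FGSM step: move each input dimension by eps in the loss-increasing direction
eps = 0.6
x_adv = x + eps * np.sign(grad_x)

clean_pred = int(sigmoid(w @ x + b) > 0.5)
adv_pred = int(sigmoid(w @ x_adv + b) > 0.5)
print("clean prediction:", clean_pred)        # -> 1
print("adversarial prediction:", adv_pred)    # -> 0 (the attack succeeds)
```

The same idea scales to image classifiers, where the gradient is computed by backpropagation and eps is kept small enough that the perturbation is nearly invisible to humans.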
AI models such as cyberbullying detection models trained on biased pre-trained word embeddings can propagate that bias, which may lead to unfair or undesired outputs. In this lab, students will learn how to measure bias in word embeddings and how to debias them.
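A common debiasing approach, which this sketch illustrates, is to remove the component of a word vector lying along an identified bias direction via vector projection. The 4-dimensional toy vectors and word choices below are invented; a real lab would load actual pre-trained embeddings.

```python
# Minimal sketch of projection-based debiasing: subtract the component
# of a word vector along a bias direction. Toy vectors are hypothetical.
import numpy as np

# Invented 4-dim embeddings for a gendered anchor pair and a profession
# word that should ideally be gender-neutral.
he = np.array([1.0, 0.0, 0.3, 0.2])
she = np.array([-1.0, 0.0, 0.3, 0.2])
doctor = np.array([0.4, 0.5, 0.1, 0.3])

# Bias direction: normalized difference of the anchor pair
g = he - she
g = g / np.linalg.norm(g)

# Debias: remove the projection of the word vector onto the bias direction
doctor_debiased = doctor - (doctor @ g) * g

print("bias component before:", float(doctor @ g))            # nonzero
print("bias component after:", float(doctor_debiased @ g))    # ~0.0
```

After the projection step, the word vector carries no component along the bias direction, so downstream classifiers built on it cannot exploit that particular axis.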