Fakeddit

Fake news has altered society in negative ways, in both politics and culture. It has adversely affected online social networks as well as offline communities and conversations. Automatic machine learning classification models are an efficient way to combat the widespread dissemination of fake news. However, a lack of effective, comprehensive datasets has hampered fake news research and detection model development. Prior fake news datasets do not provide multimodal text and image data, metadata, comment data, and fine-grained fake news categorization at the scale and breadth of our dataset. We present Fakeddit, a novel multimodal dataset consisting of over 1 million samples from multiple categories of fake news. After several stages of review, the samples are labeled with 2-way, 3-way, and 6-way classification categories through distant supervision. We construct hybrid text+image models and perform extensive experiments on multiple variations of the classification task, demonstrating the importance of multimodality and fine-grained classification, two aspects unique to Fakeddit.

Challenge

The Fakeddit Multimodal Fake News Detection Challenge aims to benchmark progress toward models that can accurately detect specific types of fake news in text and images. Awards will be given to the winning teams.

Important Dates

  • Start Date: 8/26/2020

Guidelines

The aim of Fakeddit is to provide a fine-grained multimodal fake news detection dataset and advance efforts to combat the spread of misinformation in multiple modalities. As such, we require those using our dataset to adhere to these guidelines:

  • Only use the “6_way_label” and “clean_title” columns from the public dataset linked on the GitHub page (see the loading sketch after this list).
  • Do not use additional paired text/image data.
  • Do not attempt to extract ground-truth labels for our samples from the Internet.
  • Disregarding these guidelines to improve evaluation scores is unethical and will not help future research in the area.
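
As an illustration, a training pipeline that respects these guidelines might load only the permitted columns. This is a minimal sketch assuming pandas and the tab-separated files from the public release; the file name is illustrative:

import pandas as pd

# Load only the columns permitted by the challenge guidelines.
# "multimodal_train.tsv" is a placeholder; substitute the actual
# train/validation/test file from the public release on GitHub.
df = pd.read_csv("multimodal_train.tsv", sep="\t",
                 usecols=["clean_title", "6_way_label"])

texts = df["clean_title"].tolist()   # model input text
labels = df["6_way_label"].tolist()  # fine-grained ground-truth labels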

Metrics

We evaluate detection models by accuracy: the percentage of text/image pairs in the test set that the model classifies correctly.
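
In other words, the score reduces to the following computation (a sketch with hypothetical variable names; predictions and labels are parallel sequences over the test set):

def accuracy(predictions, labels):
    # Percentage of text/image pairs classified correctly.
    correct = sum(p == y for p, y in zip(predictions, labels))
    return 100.0 * correct / len(labels)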

Ranking

The ranking for the competition is based on the evaluation metric: a team with higher accuracy is ranked above one with lower accuracy.

Download

You can access the public dataset, containing train/validation/test files for training your model, here: https://github.com/entitize/Fakeddit

The private text test set is available here: https://drive.google.com/file/d/1ExPiA_v2Dq_rG6afY4rvgUV-Jh9ByiTI/view?usp=sharing

The private image test set is available here: https://drive.google.com/file/d/1YtvM2Muf4hT0SCALI7FaILaGPJdW7lW1/view?usp=sharing
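
If you prefer to fetch the private test files programmatically, something like the following should work. This is a sketch assuming the third-party gdown package; the file IDs come from the Drive links above, and the output file names are our own placeholders:

import gdown

# File IDs taken from the Google Drive links above.
gdown.download(id="1ExPiA_v2Dq_rG6afY4rvgUV-Jh9ByiTI",
               output="private_text_test")   # placeholder file name
gdown.download(id="1YtvM2Muf4hT0SCALI7FaILaGPJdW7lW1",
               output="private_image_test")  # placeholder file name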

Paper

Read our LREC paper for more details.

If you use this dataset, please cite:

@article{nakamura2019r,
    title={r/Fakeddit: A New Multimodal Benchmark Dataset for Fine-grained Fake News Detection},
    author={Nakamura, Kai and Levy, Sharon and Wang, William Yang},
    journal={arXiv preprint arXiv:1911.03854},
    year={2019}
}