Dataset Preview

The full dataset viewer is not available: the repository mixes two CSV schemas, so the viewer's csv builder fails with a DatasetGenerationCastError. The edge files use the columns {src: string, dst: string, ts: int64}, while the vertex files (e.g. gzip://vertices.csv::hf://datasets/ComplexDataLab/CrediBench@ac1d381dbc9f4a8d11649572c20afaba91751159/april2025/vertices.csv.gz) use {domain: string, ts: int64, in_deg: int64, out_deg: int64}. Files with different columns must either be edited to match or split into separate configurations (see https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations). Only a preview of the edge rows is shown below.

| src (string) | dst (string) | ts (int64) |
| --- | --- | --- |
| abb.careers | abb.global | 20250519 |
| abb.careers | com.abb | 20250519 |
| abb.careers | com.abb.e-mobility | 20250519 |
| abb.careers | com.abb.new | 20250519 |
| abb.careers | com.abb.us.electrification | 20250519 |
| abb.careers | com.aspnetcdn.ajax | 20250519 |
| abb.careers | com.blueadvantagearkansas | 20250519 |
| abb.careers | com.cloudflare.cdnjs | 20250519 |
| abb.careers | com.dropbox | 20250519 |
| abb.careers | com.facebook | 20250519 |
| abb.careers | com.google.apis | 20250519 |
| abb.careers | com.googletagmanager | 20250519 |
| abb.careers | com.instagram | 20250519 |
| abb.careers | com.linkedin | 20250519 |
| abb.careers | com.phenompeople.assets | 20250519 |
| abb.careers | com.phenompeople.cdn | 20250519 |
| abb.careers | com.phenompeople.cdn-prod-static | 20250519 |
| abb.careers | com.phenompeople.pp-cdn | 20250519 |
| abb.careers | com.qq.weixin.mp | 20250519 |
| abb.careers | com.twitter | 20250519 |
| abb.careers | com.youtube | 20250519 |
| abb.careers | in.lnkd | 20250519 |
| abb.careers | net.jsdelivr.cdn | 20250519 |
| abb.careers | net.live.js | 20250519 |
| abb.careers | org.cems | 20250519 |
| abb.careers | org.eu.best | 20250519 |
| abb.careers | org.unitech-international | 20250519 |
| abb.global | com.abb.new | 20250519 |
| abb.global | com.adobedtm.assets | 20250519 |
| abb.global | com.linkedin | 20250519 |
| abb.mondo | abb.careers | 20250519 |
| abb.mondo | be.youtu | 20250519 |
| abb.mondo | com.abb.new | 20250519 |
| abb.mondo | com.energyefficiencymovement | 20250519 |
| abb.mondo | com.facebook | 20250519 |
| abb.mondo | com.facebook.it-it | 20250519 |
| abb.mondo | com.googletagmanager | 20250519 |
| abb.mondo | com.linkedin | 20250519 |
| abb.mondo | com.themeditelegraph | 20250519 |
| abb.mondo | com.twitter | 20250519 |
| abb.mondo | com.website-files.prod.cdn | 20250519 |
| abb.mondo | com.x | 20250519 |
| abb.mondo | com.youtube | 20250519 |
| abb.mondo | it.corriere | 20250519 |
| abb.mondo | it.elettricomagazine | 20250519 |
| abb.mondo | it.industriaitaliana | 20250519 |
| abb.mondo | it.milanofinanza.video | 20250519 |
| abb.mondo | it.publiteconline | 20250519 |
| abb.mondo | net.cloudfront.d3e54v103j8qbb | 20250519 |
| abb.mondo | net.jsdelivr.cdn | 20250519 |
| abb.mondo | one.apsis.form | 20250519 |
| abbott.ch | abbott.lifetothefullest | 20250519 |
| abbott.ch | com.abbott | 20250519 |
| abbott.ch | com.adobedtm.assets | 20250519 |
| abbott.ch | com.facebook | 20250519 |
| abbott.ch | com.fortune | 20250519 |
| abbott.ch | com.huffingtonpost.live | 20250519 |
| abbott.ch | com.instagram | 20250519 |
| abbott.ch | com.linkedin | 20250519 |
| abbott.ch | com.mediaroom.abbott | 20250519 |
| abbott.ch | com.modernmedicine.ophthalmologytimes | 20250519 |
| abbott.ch | com.nbcnews | 20250519 |
| abbott.ch | com.trustarc.consent | 20250519 |
| abbott.ch | com.twitter | 20250519 |
| abbott.ch | com.youtube | 20250519 |
| abbott.ch | org.nsbri | 20250519 |
| abbott.diagnostics.us.cloud | abbott.corelaboratory | 20250519 |
| abbott.diagnostics.us.cloud | com.abbott | 20250519 |
| abbott.diagnostics.us.cloud | net.typekit.use | 20250519 |
| abbott.ensure.es | abbott.ensure | 20250519 |
| abbott.ensure.es | abbott.es | 20250519 |
| abbott.ensure.es | abbott.family | 20250519 |
| abbott.ensure.es | abbott.pediasure | 20250519 |
| abbott.ensure.es | ca.ensure | 20250519 |
| abbott.ensure.es | cat.bonpreuesclat.compraonline | 20250519 |
| abbott.ensure.es | com.adobedtm.assets | 20250519 |
| abbott.ensure.es | com.arenal | 20250519 |
| abbott.ensure.es | com.atida | 20250519 |
| abbott.ensure.es | com.dosfarma | 20250519 |
| abbott.ensure.es | com.ensure | 20250519 |
| abbott.ensure.es | com.facebook | 20250519 |
| abbott.ensure.es | com.farmaciasdirect | 20250519 |
| abbott.ensure.es | com.farmaciastrebol | 20250519 |
| abbott.ensure.es | com.farmavazquez | 20250519 |
| abbott.ensure.es | com.googleapis.maps | 20250519 |
| abbott.ensure.es | com.instagram | 20250519 |
| abbott.ensure.es | com.marvimundo | 20250519 |
| abbott.ensure.es | com.nutritienda | 20250519 |
| abbott.ensure.es | es.alcampo.compraonline | 20250519 |
| abbott.ensure.es | es.amazon | 20250519 |
| abbott.ensure.es | es.carrefour | 20250519 |
| abbott.ensure.es | es.druni | 20250519 |
| abbott.ensure.es | es.elcorteingles | 20250519 |
| abbott.ensure.es | es.eroski.supermercado | 20250519 |
| abbott.ensure.es | es.farmacia4estaciones | 20250519 |
| abbott.ensure.es | es.hipercor | 20250519 |
| abbott.ensure.es | es.mifarma | 20250519 |
| abbott.ensure.es | eu.primor | 20250519 |
| abbott.ensure.es | net.jsdelivr.cdn | 20250519 |
| abbott.family | abbott.nutritionnews | 20250519 |

End of preview.
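Because the edge files (src, dst, ts) and the vertex files (domain, ts, in_deg, out_deg) have different schemas, they have to be loaded separately. A minimal pandas sketch of reading the edge schema follows; the inline sample stands in for a gzipped edge CSV from the repository (whose exact file name should be checked against the repo layout), and the dtypes mirror the schema shown in the preview.

```python
import io

import pandas as pd

# The edge and vertex files use different schemas, so read them separately.
# This in-memory sample stands in for a gzipped edge CSV from the repo;
# pd.read_csv decompresses real ".gz" paths transparently.
edge_csv = io.StringIO(
    "src,dst,ts\n"
    "abb.careers,abb.global,20250519\n"
    "abb.careers,com.abb,20250519\n"
)
edges = pd.read_csv(
    edge_csv, dtype={"src": "string", "dst": "string", "ts": "int64"}
)

print(edges.shape)        # (2, 3)
print(edges["ts"].dtype)  # int64
```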

Dataset Card for CrediBench 1.1

CrediBench 1.1 is a large-scale, temporal webgraph built from web data pulled from Common Crawl. A prior version of the paper is available here (New Perspectives in Graph Machine Learning workshop @ NeurIPS 2025), with the latest version still under review. CrediBench 1.0, presented in that prior work, consisted of a static webgraph built from one month of data, while the current version contains three months of data (October to December 2024, a period surrounding the U.S. federal elections and of increased misinformation). We are also actively constructing and uploading additional monthly graphs.

Dataset Details

Dataset Description

This dataset is composed of monthly slices of large-scale web networks. These webgraphs contain over 1 billion edges and over 45 million nodes per month. A node represents a website domain (e.g., google.com), and a directed edge represents a hyperlink relation (e.g., an edge from cbc.ca to reuters.com indicates that a page on cbc.ca contains a hyperlink to a page on reuters.com). The webgraphs are supplemented with text attributes, drawn partly from Common Crawl and partly from web scraping, since text features play an important role in misinformation detection. We additionally supplement them with the credibility scores made available by Lin et al., to enable supervised and semi-supervised learning as explained in our paper.
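The node/edge semantics described above can be illustrated with a toy adjacency-list sketch in plain Python; the domains are just the examples from the text, not actual rows of the dataset.

```python
# Adjacency-list sketch: a node is a website domain, and a directed edge
# (u, v) means some page on domain u hyperlinks to a page on domain v.
edges = [
    ("cbc.ca", "reuters.com"),
    ("cbc.ca", "google.com"),
    ("reuters.com", "google.com"),
]

out_links: dict[str, set[str]] = {}
in_links: dict[str, set[str]] = {}
for src, dst in edges:
    out_links.setdefault(src, set()).add(dst)
    in_links.setdefault(dst, set()).add(src)

print(len(out_links["cbc.ca"]))     # out-degree of cbc.ca: 2
print(len(in_links["google.com"]))  # in-degree of google.com: 2
```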

  • Curated by a team of collaborators from the Complex Data Lab @ Mila - Quebec AI Institute, the University of Oxford, McGill University, Concordia University, UC Berkeley, University of Montreal, and AITHYRA.
  • Funding: This research was supported by the Engineering and Physical Sciences Research Council (EPSRC) and the AI Security Institute (AISI) grant: Towards Trustworthy AI Agents for Information Veracity and the EPSRC Turing AI World-Leading Research Fellowship No. EP/X040062/1 and EPSRC AI Hub No. EP/Y028872/1. This research was also enabled in part by compute resources provided by Mila (mila.quebec) and Compute Canada.
  • License: CC-BY-4.0 (inherited from Common Crawl).

Dataset Statistics:

| Month | V (nodes) | E (edges) | Min. deg. | Mean deg. | Max. deg. | Leaves (deg. = 1) | Edge density |
| --- | --- | --- | --- | --- | --- | --- | --- |
| October 2024 | 50,288,479 | 1,074,971,387 | 1 | 42.75 | 17,112,352 | 30,278 | 4.3e-07 |
| November 2024 | 27,567,417 | 555,905,375 | 1 | 40.33 | 9,019,038 | 30,553 | 7.3e-07 |
| December 2024 | 45,030,252 | 1,014,523,551 | 1 | 45.06 | 14,719,077 | 28,857 | 5.0e-07 |
| February 2025 | 49,639,664 | 1,167,748,533 | 1 | 47.05 | 17,078,954 | 24,430 | 4.7e-07 |
| March 2025 | 50,162,733 | 1,212,826,396 | 1 | 48.36 | 16,691,193 | 22,629 | 4.8e-07 |
| April 2025 | 17,998,846 | 349,717,108 | 1 | 38.86 | 5,284,367 | 25,606 | 1.1e-06 |
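The mean-degree and edge-density columns can be cross-checked from V and E. A quick sketch, assuming mean degree counts each edge at both endpoints (2E/V) and density is the directed simple-graph ratio E/(V(V-1)); both conventions are assumptions here, not definitions restated from the paper.

```python
# Cross-check two table rows against V and E (assumed conventions:
# mean degree = 2E/V, density = E / (V * (V - 1))).
months = {
    "October 2024": (50_288_479, 1_074_971_387),
    "November 2024": (27_567_417, 555_905_375),
}
for month, (V, E) in months.items():
    mean_deg = round(2 * E / V, 2)
    density = E / (V * (V - 1))
    print(f"{month}: mean deg. {mean_deg}, density {density:.1e}")
    # October 2024: mean deg. 42.75, density 4.3e-07
    # November 2024: mean deg. 40.33, density 7.3e-07
```

Both rows reproduce the table's values under these conventions, which suggests the remaining rows follow the same definitions.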

Resources

Uses

This dataset is intended as a data source for research efforts against online misinformation. Specifically, as the first large-scale webgraph that is both text-attributed and dynamic, CrediBench is an ideal data source for developing unreliable-domain detection methods based on spatio-temporal cues.

Out-of-Scope Use

This dataset is not intended for LLM training. Because it was designed for misinformation detection at the domain level and at web scale, it contains numerous domains and content pages with inappropriate content, which may be harmful if used to train conversational AI or other types of generative AI outside the scope of our task.

Data Collection and Processing

The process of collection, processing, and use is detailed in our team's paper. We collect data through our proposed CrediBench pipeline (available at our repository), which builds a month's worth of data by pulling from Common Crawl, constructs the graph from it, and processes it to discard isolated and low-degree nodes. Each edge has a timestamp: the date of the first day of the crawl's week, in YYYYMMDD format.
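Since each timestamp encodes that date as a YYYYMMDD integer, it parses directly with the standard library; for example, the value 20250519 seen in the preview rows:

```python
from datetime import datetime


def parse_ts(ts: int) -> datetime:
    """Decode a CrediBench edge timestamp (a YYYYMMDD integer)."""
    return datetime.strptime(str(ts), "%Y%m%d")


d = parse_ts(20250519)
print(d.date())          # 2025-05-19
print(d.strftime("%A"))  # Monday
```

Consistent with the first-day-of-week convention, 2025-05-19 falls on a Monday.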

Citation

BibTeX:

@article{kondrupsabry2025credibench,
  title={{CrediBench: Building Web-Scale Network Datasets for Information Integrity}},
  author={Kondrup, Emma and Sabry, Sebastian and Abdallah, Hussein and Yang, Zachary and Zhou, James and Pelrine, Kellin and Godbout, Jean-Fran{\c{c}}ois and Bronstein, Michael and Rabbany, Reihaneh and Huang, Shenyang},
  journal={arXiv preprint arXiv:2509.23340},
  year={2025},
  note={New Perspectives in Graph Machine Learning Workshop @ NeurIPS 2025},
  url={https://arxiv.org/abs/2509.23340}
}

APA:

Kondrup, E., Sabry, S., Abdallah, H., Yang, Z., Zhou, J., Pelrine, K., Godbout, J.-F., Bronstein, M., Rabbany, R., & Huang, S. (2025).
CrediBench: Building Web-Scale Network Datasets for Information Integrity.
New Perspectives in Graph Machine Learning Workshop @ NeurIPS 2025. arXiv:2509.23340. https://arxiv.org/pdf/2509.23340

Dataset Card Authors / Contact

For any questions on the dataset, please contact Emma Kondrup, Sebastian Sabry, or Shenyang (Andy) Huang.
