Summarize long texts
This dataset provides long prompts of varying sizes, intended for testing language models with long inputs.
Columns
- id: numerical id
- source: the source of the prompt text
- length: approximate token length of the prompt (see "Item lengths" below)
- text: the text of the prompt
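
A minimal sketch of loading the dataset and inspecting these columns, assuming it can be read with the Hugging Face `datasets` library; the repo id below is a placeholder, not the actual dataset path.

```python
from datasets import load_dataset

# Placeholder repo id for illustration only; substitute the real dataset path.
ds = load_dataset("user/long-prompts", split="train")

print(ds.column_names)  # expected: ['id', 'source', 'length', 'text']

row = ds[0]
print(row["id"], row["source"], row["length"], len(row["text"]))
```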
Item lengths
Exact prompt lengths depend on the model's tokenizer and can differ quite a bit between tokenizers, so the lengths included in the dataset should be treated as relative/approximate (see the token-count sketch after the table below).
id  source             length
--  -----------------  ------
 1  alice                1000
 2  sherlock             2000
 3  lookingglass         3000
 4  bluefairy            4000
 5  alice                5000
 6  greatexpectations    6000
 7  littlewomen          7000
 8  timemachine          8000
 9  sherlock             9000
10  lookingglass        10000
11  tomsawyer           12000
12  littlewomen         14000
13  windwillows         16000
14  sherlock            20000
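
To see how the nominal lengths translate into actual token counts for a specific model, a prompt can be re-tokenized with that model's tokenizer. The sketch below assumes `transformers` tokenizers and reuses the placeholder repo id from the snippet above; the model names are examples, not recommendations.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("user/long-prompts", split="train")  # placeholder repo id

for model_name in ["gpt2", "bert-base-uncased"]:
    tok = AutoTokenizer.from_pretrained(model_name)
    for row in ds:
        # Count tokens for this prompt under the chosen tokenizer.
        n_tokens = len(tok.encode(row["text"], add_special_tokens=False))
        print(f"{model_name}: id={row['id']} nominal={row['length']} actual={n_tokens}")
```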
Sources
All the texts are from public domain books:
- alice: https://ia801604.us.archive.org/6/items/alicesadventures19033gut/19033.txt
- lookingglass: https://www.gutenberg.org/files/12/12-0.txt
- bluefairy: https://www.gutenberg.org/files/503/503-0.txt
- sherlock: https://www.gutenberg.org/files/1661/1661-0.txt
- greatexpectations: https://www.gutenberg.org/files/1400/1400-0.txt
- littlewomen: https://www.gutenberg.org/files/514/514-0.txt
- timemachine: https://www.gutenberg.org/files/35/35-0.txt
- tomsawyer: https://www.gutenberg.org/files/74/74-0.txt
- windwillows: https://www.gutenberg.org/files/289/289-0.txt
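
A hedged sketch of how a prompt of a given approximate size could be cut from one of these source texts, for anyone who wants to extend the dataset. The word-based heuristic (~0.75 words per token for English) is an assumption, not the procedure used to build this dataset.

```python
import urllib.request

URL = "https://www.gutenberg.org/files/1661/1661-0.txt"  # sherlock, from the list above
TARGET_TOKENS = 4000
WORDS_PER_TOKEN = 0.75  # rough rule of thumb; actual ratio is tokenizer dependent

with urllib.request.urlopen(URL) as resp:
    text = resp.read().decode("utf-8", errors="ignore")

# Take the first N words so the prompt lands near the target token count.
approx_words = int(TARGET_TOKENS * WORDS_PER_TOKEN)
prompt = " ".join(text.split()[:approx_words])
print(f"Selected {len(prompt.split())} words (~{TARGET_TOKENS} tokens)")
```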