---
dataset_info:
  features:
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 58111149
      num_examples: 100
  download_size: 32834294
  dataset_size: 58111149
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: odc-by
task_categories:
  - text-generation
  - feature-extraction
language:
  - en
tags:
  - longboi
  - 128k
  - long context
size_categories:
  - n<1K
source_datasets: HuggingFaceFW/fineweb
---

# BEE-spoke-data/fineweb-100_128k

100 documents from `HuggingFaceFW/fineweb`, each containing 128,000 or more tokens under the GPT-4 `tiktoken` encoding.
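
For reference, a filter along these lines can be reproduced with `datasets` and `tiktoken`. This is a minimal sketch, not the exact script used to build this dataset: the streaming setup and the early stop at 100 documents are assumptions.

```python
import tiktoken
from datasets import load_dataset

# GPT-4 uses the cl100k_base encoding.
enc = tiktoken.encoding_for_model("gpt-4")
MIN_TOKENS = 128_000

# Stream fineweb to avoid downloading the full corpus up front.
stream = load_dataset("HuggingFaceFW/fineweb", split="train", streaming=True)

long_docs = []
for row in stream:
    # disallowed_special=() prevents errors if a document happens to
    # contain literal special-token strings like "<|endoftext|>".
    n_tokens = len(enc.encode(row["text"], disallowed_special=()))
    if n_tokens >= MIN_TOKENS:
        long_docs.append(row)
        if len(long_docs) == 100:  # assumed stopping point for this card
            break
```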