Final Version

#2
by hcoskun - opened

Hello,

Thanks a lot for this amazing dataset. I wonder when you will finalize it; I see that it is growing significantly every 2-3 days. I would love to download the final version.

BIG data org

I estimate the scraping is less than 10% complete. The process should end up with near 100% of the site's content, so the dataset will be finalized either when scraping is complete or when I run out of funds for scraping infrastructure, whichever comes first.

BIG data org

Note the estimate is based on Flickr's tagline "home to tens of billions of photos" and average acquisition rates, so the total content should be in the region of 20-30B+ records. At the current rate of ~150M new records per day, completion is still some time away; I'll look into increasing the number of instances per scraping node to speed things up.
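
For a rough sense of the timeline, here is a back-of-the-envelope sketch in Python. All the inputs are estimates taken from the figures above (the ~3B already-scraped count is from the milestone mentioned below), so treat the output as an order-of-magnitude guess, not a schedule:

```python
# Back-of-the-envelope ETA for the scrape, using the figures quoted above.
# All inputs are rough estimates, not measured values.

DAILY_RATE = 150_000_000          # ~150M new records per day (estimated)
TOTAL_LOW = 20_000_000_000        # lower bound on Flickr's total photos
TOTAL_HIGH = 30_000_000_000       # upper bound
ALREADY_SCRAPED = 3_000_000_000   # ~3B records at the time of writing

for total in (TOTAL_LOW, TOTAL_HIGH):
    remaining = total - ALREADY_SCRAPED
    days = remaining / DAILY_RATE
    print(f"{total / 1e9:.0f}B total -> ~{days:.0f} days remaining")

# Output:
# 20B total -> ~113 days remaining
# 30B total -> ~180 days remaining
```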

If you'd prefer a static version to work with, I can create separate repos at certain milestones. The next update in this repo will be at 3B, which should be available by Thursday/Friday.

Hello,
Thanks a lot for the work; it is quite valuable. I don't think I need a static version right now, as 20-30B is quite large. By the way, let me know if I can help as well. Thanks again for the work.

BIG data org
edited 6 days ago

Apologies for the delay. There were some issues with processing from MongoDB to Parquet due to the size of the collection and resource limitations, so I've switched to a custom processing pipeline rather than PySpark with the Spark connector. This will make subsequent updates much faster. In addition, parts now contain an exact number of records and maintain sort order, so, for example, part-00000 -> part-00999 will always be the same 1B samples; downloading updates should also be much faster.
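
To illustrate the idea of stable, fixed-size parts, here is a minimal sketch of exporting a MongoDB collection to Parquet in deterministic chunks. This is not the repo's actual pipeline; the database, collection, and field names are hypothetical, and it assumes pymongo and pyarrow are installed:

```python
# Sketch: export a MongoDB collection to fixed-size, stably sorted Parquet
# parts, so that part-00000..part-NNNNN always hold the same rows across
# re-runs and new data only appends new parts at the end.
# NOT the repo's pipeline; names below are hypothetical.

import pyarrow as pa
import pyarrow.parquet as pq
from pymongo import MongoClient

ROWS_PER_PART = 1_000_000  # exact row count per part file

client = MongoClient("mongodb://localhost:27017")
coll = client["flickr"]["photos"]  # hypothetical database/collection

# A stable sort key (here: _id) is what makes the parts deterministic.
cursor = coll.find({}).sort("_id", 1)

part, buffer = 0, []
for doc in cursor:
    doc["_id"] = str(doc["_id"])  # ObjectId is not a native Arrow type
    buffer.append(doc)
    if len(buffer) == ROWS_PER_PART:
        pq.write_table(pa.Table.from_pylist(buffer), f"part-{part:05d}.parquet")
        part, buffer = part + 1, []

if buffer:  # trailing partial part; rewritten on the next export run
    pq.write_table(pa.Table.from_pylist(buffer), f"part-{part:05d}.parquet")
```

Because completed parts never change, consumers only need to download the new part files after each update rather than re-fetching the whole dataset.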

The current version is now at 3.654B records, and an update will be made later today.

Thanks a lot. This is really useful.
