One of spez’s answers in the infamous Reddit AMA struck me:
Two things happened at the same time: the LLM explosion put all Reddit data use at the forefront, and our continuing efforts to reign in costs…
I am beginning to think all they wanted was to get their share of the AI pie, since we know Reddit’s data is one of the major datasets for training conversational models. But they are such a bunch of bumbling fools, and so chronically understaffed, that the whole thing blew up in their faces. At this stage their only chance of survival may well be to be bought out by OpenAI…
It makes a lot of sense, but given the way organizations such as the Internet Archive are saving webpages from Reddit, wouldn’t it be feasible to train models on those archived pages and circumvent the API charges entirely?