Not at all - you’d have to train a model, but then it could be run locally. You could even have something like SETI@home and run it on volunteers’ computers.
I suppose some social media platforms charge for access, and it might not be a good idea to scrape them for something like this (or at least to admit it).
Yeah lemme just download many petabytes a month of data on my home Internet connection
In a world where you can stream 4k video, you think a few images are going to be a problem?
A few billion images yes
You’re just making up big numbers and ignoring what I’m saying.
Either it’s done centrally, in which case it’s feasible if they have funding, or it’s done SETI-style and users share the load (and could have data limits, etc.).
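For what it’s worth, here’s a quick back-of-envelope sketch of the per-user share. The image count, average image size, and size of the volunteer pool are all assumptions, not measured numbers, but it shows why "petabytes" total doesn’t mean petabytes per person:

```python
# Back-of-envelope sketch (all figures are assumptions, not measurements):
# how much each volunteer would download if a few billion images were
# split SETI@home-style across a volunteer pool.

TOTAL_IMAGES = 3_000_000_000      # "a few billion" images (assumed)
AVG_IMAGE_BYTES = 500 * 1024      # ~500 KB per image (assumed average)
VOLUNTEERS = 100_000              # assumed size of the volunteer pool

total_bytes = TOTAL_IMAGES * AVG_IMAGE_BYTES
per_user_bytes = total_bytes / VOLUNTEERS

print(f"Total dataset:  {total_bytes / 1e15:.2f} PB")   # ~1.54 PB
print(f"Per volunteer:  {per_user_bytes / 1e9:.1f} GB")  # ~15.4 GB
# Under these assumptions the whole dataset is a couple of petabytes,
# but each volunteer only moves on the order of tens of gigabytes -
# comparable to a few hours of 4K streaming.
```

Change the assumptions and the per-user number scales accordingly, which is exactly what data limits per volunteer would manage.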