The point is that mainstream (for lack of a better word) user agents will identify themselves.
Perhaps it’s important to differentiate here between known user agents and general scrapers.
Googlebot, Bingbot and any honourable UA will send a specific user-agent string and publish a page explaining why they're fetching your pages. They pretty much always provide a way to verify via reverse DNS that a request claiming their user agent is coming from a genuine IP.
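As a minimal sketch of that verification (assuming Python's standard socket module; the function name and the Googlebot domain suffixes here are just illustrative defaults):

```python
import socket

def verify_bot_ip(ip, allowed_suffixes=(".googlebot.com", ".google.com")):
    """Reverse-resolve the IP, check the domain, then forward-confirm it."""
    try:
        hostname, _, _ = socket.gethostbyaddr(ip)  # reverse (PTR) lookup
    except socket.herror:
        return False  # no PTR record, so it can't be a verified crawler
    if not hostname.endswith(allowed_suffixes):
        return False  # hostname isn't under the crawler's published domains
    try:
        # Forward-confirm: the hostname must resolve back to the same IP,
        # otherwise the PTR record alone could have been spoofed.
        forward_ips = {info[4][0] for info in socket.getaddrinfo(hostname, None)}
    except socket.gaierror:
        return False
    return ip in forward_ips
```

The forward-confirmation step is the important bit: anyone can set a PTR record on their own IP range, but they can't make the crawler's own DNS resolve one of its hostnames back to their IP.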
Wrt general scrapers, that's a broader issue than AI. That's just scrapers scraping.
If honourable user agents will honour a site owner's wishes, then a 'noml' tag can instruct them not to use the page for machine learning.
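To illustrate what honouring it might look like (purely a sketch: 'noml' isn't a published standard, and putting it in the robots meta tag and the names below are my assumptions), a compliant crawler could check for the directive before adding a page to a training corpus:

```python
from html.parser import HTMLParser

class NoMLChecker(HTMLParser):
    """Detects a hypothetical <meta name="robots" content="... noml ..."> directive."""
    def __init__(self):
        super().__init__()
        self.noml = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            tokens = {t.strip().lower() for t in a.get("content", "").split(",")}
            if "noml" in tokens:
                self.noml = True

def may_train_on(html):
    """Return False if the page opts out of machine-learning use."""
    checker = NoMLChecker()
    checker.feed(html)
    return not checker.noml

page = '<html><head><meta name="robots" content="index, noml"></head></html>'
print(may_train_on(page))  # False: page may be indexed, but not used for ML
```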
This is as much about protecting content IP as drawing a line in the sand, IMO. Perhaps it also protects brands from misinformation an AI might present.
Yes, people will continue to steal content; that has happened since the start of the web. The distinction here is about not using content to train AI models that'll steal clicks from content creators.
As I say, honourable UAs will honour robots.txt and its protocol; this proposal is an extension of that.
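The existing mechanics already cover crawlers that identify themselves. OpenAI documents that GPTBot respects robots.txt, for example, and Python's stdlib parser shows how the opt-out plays out (the URL below is just a placeholder):

```python
from urllib.robotparser import RobotFileParser

# A site-wide opt-out for one AI crawler, expressed in plain robots.txt:
rp = RobotFileParser()
rp.parse([
    "User-agent: GPTBot",
    "Disallow: /",
])

print(rp.can_fetch("GPTBot", "https://example.com/article"))    # False
print(rp.can_fetch("Googlebot", "https://example.com/article")) # True
```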
Google have been proposing something similar, presumably for different reasons: https://services.google.com/fh/files/misc/public_comment_thought_starters_oct23.pdf
At small scales, perhaps not, but as I said, this has always been the case with scraping.
The robots.txt protocol has never been law, but it has been honoured, so it's worth hanging on to. On one level it's still the definition of 'good bots' vs 'bad bots', and that's about the best site owners have short of playing whack-a-mole with UA-IP variations.