No, he’s challenging the assertion that it’s “trivially easy” to make AIs output their training data.
Older AIs have occasionally regurgitated bits of training data as a result of overfitting, a training flaw that modern techniques have made great strides in eliminating. It’s no longer a particularly common problem, and even when it does occur it only applies to the specific bits of training data that were overfit on, not to the training data in general.
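To make the memorization-from-overfitting point concrete, here's a toy sketch (a character-level Markov chain, emphatically not an LLM, with a made-up one-sentence corpus): when the model has seen so little data that each context has essentially one continuation, generation just replays the training text verbatim — the same basic failure mode as an overfit network regurgitating a specific example.

```python
from collections import defaultdict

# Toy illustration only: a character-level Markov chain trained on a
# tiny, hypothetical corpus. With this little data, most contexts have
# exactly one continuation, so generation regurgitates training text.
corpus = "the quick brown fox jumps over the lazy dog"

# Map each 3-character context to the characters observed after it.
K = 3
successors = defaultdict(list)
for i in range(len(corpus) - K):
    successors[corpus[i:i + K]].append(corpus[i + K])

def generate(seed, length):
    out = seed
    for _ in range(length):
        options = successors.get(out[-K:])
        if not options:
            break
        out += options[0]  # near-deterministic: the corpus was memorized
    return out

# The output is a verbatim chunk of the training sentence.
print(generate("the", 60))
```

A real LLM trained on a large, deduplicated corpus sees many continuations for any context, which is exactly why verbatim regurgitation is tied to the overfit (e.g. heavily duplicated) examples rather than to the training set as a whole.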
How easy are we talking about here? Also, making the model public domain doesn’t mean making its output public domain. The output of an LLM should still abide by copyright law, as it should.
LOL no. The weights encode the training data and it’s trivially easy to make AI generators spit out bits of their training data.
paper?
No, training data.
I thought he meant LLMs shot out bits of paper like some ticker-tape parade.