A small, nuanced correction to your comment about language models being pre-trained: that's true for GPT (it's the "P") and for all modern high-performance language models since 2017, but it was not in fashion before that period, and may not be again at some point in the future.
We have almost no idea what the sequel looks like. Wait to see what risks it takes before judging it.