• 0 Posts
  • 43 Comments
Joined 2 months ago
Cake day: July 16th, 2024



  • I don’t entirely agree, though.

    That WAS the point of NaNoWriMo in the beginning. I went there because I wanted feedback, and feedback from people who cared (no offense to my friends, but they weren’t interested in my writing, and that’s totes cool).

    I think it is a valid core desire to want constructive feedback on your work, and to acknowledge that you don’t have a complete perspective, even on yourself. Whether the AI can or does provide that is questionable, but the starting place, “I want /something/ accessible to be a rubber ducky,” is valid.

    My main concern here is, obviously, that it feels like NaNoWriMo is taking the easy way out for the $$$ and likely its Silicon Valley connections. Wouldn’t it be nice if NaNoWriMo said something like, “Whatever technology tools exist today or tomorrow, we stand for writers’ essential role in the process, and against the unethical labor implications of indiscriminate, non-consensual machine learning as the basis for any process.”


  • NovelAI

    I’ll step up and say, I think this is fine, and I support your use. I get it. I think that there are valid use cases for AI where the unethical labor practices become unnecessary, and where ultimately the work still starts and ends with you.

    In a world, maybe not too far in the future, where copyright law is strengthened, where artist and writer consent is respected, and it becomes cheap and easy to use a smaller model trained on licensed data and your own inputs, I can definitely see how a contextual autocomplete that follows your style and makes suggestions is totally useful and ethical.

    But I understand people’s visceral reaction to the current world. I’d say it’s OK to stay your course.



  • Oh man, anyone who runs on such existential maximalism has infinite power to state things as if their conclusion has only one possible meaning.

    How about invoking the Monkey’s Paw: what if every statement is true, but just not in the way they think?

    1. A perfect memory which is infinitely copyable and scalable is possible. And it’s called: all the things in nature, in sum.
    2. In fact, we’re already there today, because it is, quite literally, the sum of nature. The question for tomorrow is, “so, like, what else is possible?”
    3. And it might not even have to try or do anything at all, especially if we don’t bother to save ourselves from ecological disaster.
    4. What we don’t know can literally be anything. That’s why it’s important not to project fantasy, but to conserve the fragile beauty of what you have, regardless of whether things will “one day fall apart”. Death and taxes, mate.

    And Yud can be both technically right one day and someone whose interpretations today are dumb and worthy of mockery.


  • The issue isn’t even that AI is doing grading, really. There are worlds where using technology to assist in grading isn’t a loss for a student.

    The issue is that all of this is an excuse not to invest in students at all, and the turn here is purely a symptom of that. Because in a world where we invest in technology to assist in education, the first thing that happens is we recognize the completely unsexy and obvious things that also need to happen: funding for maintenance of school buildings, basic supplies, balancing class sizes by hiring and redistricting. You know, the obvious shit.

    But those things don’t attract the attention of the debt metabolism; they’re too obvious and don’t offer more leverage for short-term futures. Believing there is a future for the next generation is inherently risky and ambiguous. You can only invest in it if you actually care.



  • Yeah, this lines up with what I have heard, too. There is always talk of new models, but even the stuff in the pipeline that hasn’t been released yet isn’t that distinguishable from the existing stuff.

    The best explanation of Strawberry is that it isn’t any particular thing; rather, it’s a marketing and project framing, both internal and external, that amounts to… cost optimization and hype-driving. Shift the goalposts and tell two stories. One: if it just gets affordable enough, genAI in a loop really can do everything (the reality is probably much more modest: when genAI gets cheap enough, by several means, it’ll have several more modest and generally useful use cases, and it won’t have to be so legally grey). The other: we’re already there, and one day you’ll wake up and your brain won’t be good enough to matter anymore, or something.

    Again, this is apparently the future of software releases. :/