We also want to be clear in our belief that the categorical condemnation of Artificial Intelligence has classist and ableist undertones, and that questions around the use of AI tie to questions around privilege.

  • Classism. Not all writers have the financial ability to hire humans to help at certain phases of their writing. For some writers, the decision to use AI is a practical, not an ideological, one. The financial ability to engage a human for feedback and review assumes a level of privilege that not all community members possess.
  • Ableism. Not all brains have the same abilities, and not all writers function at the same level of education or proficiency in the language in which they are writing. Some brains and ability levels require outside help or accommodations to achieve certain goals. The notion that all writers “should” be able to perform certain functions independently is a position that we disagree with wholeheartedly. There is a wealth of reasons why individuals can’t “see” the issues in their writing without help.
  • General Access Issues. All of these considerations exist within a larger system in which writers don’t always have equal access to resources along the chain. For example, underrepresented minorities are less likely to be offered traditional publishing contracts, which places some, by default, into the indie author space, which inequitably creates upfront cost burdens that authors who do not suffer from systemic discrimination may not have to incur.

Presented without comment.

  • conciselyverbose@sh.itjust.works · 56 · 2 months ago

    There is a wealth of reasons why individuals can’t “see” the issues in their writing without help.

    If you can’t see the issues in your own writing, you’re exactly who is most vulnerable to AI’s “syntactically valid but complete nonsense” output.

    • imadabouzu@awful.systems · 4 · 2 months ago

      I don’t entirely agree, though.

      That WAS the point of NaNoWriMo in the beginning. I went there because I wanted feedback, and feedback from people who cared (no offense to my friends, but they weren’t interested in my writing and that’s totes cool).

      I think it is a valid core desire to want constructive feedback on your work, and to acknowledge that you are not a complete perspective, even on yourself. Whether the AI can or does provide that is questionable, but the starting place, “I want /something/ accessible to be a rubber ducky” is valid.

      My main concern, obviously, is that it feels like NaNoWriMo is taking the easy way out here for the $$$ and likely its Silicon Valley connections. Wouldn’t it be nice if NaNoWriMo said something like, “Whatever technology tools exist today or tomorrow, we stand for the writer’s essential role in the process, and against the unethical labor implications of indiscriminate, non-consensual machine learning as the basis for any process.”