• @tiresieas@dormi.zone
    English
    11 months ago

    Well first, AI won’t end the world… at least not in its current state. There’s plenty of sci-fi covering that exact doomsday scenario: a highly advanced artificial general intelligence (AGI), for one reason or another, decides to eradicate or drastically alter humanity as we know it, usually because it sees humans/humanity as a blight or threat (see: Skynet from Terminator, the Geth from Mass Effect), as a resource to be used (see: the machines from The Matrix, the Reapers from Mass Effect), or as a twisted form of protection (Ultron from Marvel comics/the MCU, AUTO from WALL-E). Will something like this happen? Hopefully not, but definitely not with the “AI” we have now.

    The impact of AI right now is primarily social. The tip of the iceberg is its use in academia (students using ChatGPT to write essays, professors using “AI detectors” that also flag legitimate essays as AI-generated) and the issue of art generation. The biggest impact I think we’re going to see become a major issue soon is deepfakes. We’ve already seen some of this come up, with certain female online personalities having AI-generated or deepfaked nudes produced of them, or the brief fad of AI-produced audio of US presidents hanging out, making tier lists, and playing video games. Political theater, particularly in the US, already relies on misleading or out-of-context sound bites and general misinformation, and voice synthesis tech can drastically amplify this. Imagine inserting a damning line into the middle of a platform speech, or fabricating a “leaked phone call” from whole cloth… or doing the opposite: gaslighting the public about what was really said, or claiming a genuine recording was faked. The mere proliferation of voice synthesis, whether or not it actually gets used, will negatively impact the public’s political literacy.

    Going back to the arts, we’re also seeing this issue come up (at least partially) with the recent WGA/SAG-AFTRA strikes and in art communities, where a large language model or image generator is used to “save money” by cutting out human artists. Think of all the money a company could save by eliminating the need for writers, artists, or even background extras and replacing them with generative models.

    We may even see greater impacts on a personal and cultural level, such as AI companions marketed as your friend or romantic partner.

    All that to say: I don’t think AI, as it is now, is all bad, but the potential downsides of even the basic “AI” we have today vastly outweigh the benefits of a text bot whose output mostly just looks like it should make sense, or of one-off generated art pieces. There’s a lot of bad, and the good is pretty nebulous.