• @altair222@beehaw.org
    2 · 1 year ago

    And yet they released it. Anyone who still entertains this propaganda device while being aware of all of this is completely out of my circle.

      • @altair222@beehaw.org
        2 · 1 year ago

        I mean that ChatGPT is a potential propaganda device, given its blatant inclination towards disinformation, with copycats like Bing’s actively trying to gaslight their users too.

            • casey is remote
              1 · 1 year ago

              @Gaywallet Mostly because I find it odd that some people place an undue amount of weight on the responses #ChatGPT provides. Many were shocked when it said things that were wrong, but in reality it’s just imitating human beings, who are often wrong anyway. If I tell you misinformation, for example, you’re likely to be far less surprised or perturbed than if #ChatGPT told you the same thing.

              • Gaywallet (they/it) OP
                1 · 1 year ago

                Why do you think that I perceive ChatGPT in this way? I voiced an opinion about the biases that ChatGPT and most AI have due to their large training sets, which reflect systemic biases.

                  • Gaywallet (they/it) OP
                    5 · 1 year ago

                    I think a focus on the source of the misinformation is misplaced

                    It’s the power of that source to generate misinfo faster and at close to no cost that’s the more pressing issue here.

                    I don’t think this is particularly likely to happen, but imagine I use an LLM to create the legal documents to spin up non-profit companies for very little cost, and I hire a single lawyer to simply file these documents without reading them, reviewing them only if they get rejected. I could create an entire network of fake reporting companies fairly easily. I could then have an LLM write a bunch of fake news, post it to websites for these fake reporting companies, and embed an additional layer of reporting on top of the reporting to make it seem legitimate. Perhaps some of the reports are actually Twitter bots, Instagram bots, etc. spinning up images with false info on them, with paid bot farms surfacing these posts enough for them to catch on and spread naturally on outrage or political content alone.

                    This kind of reporting might seem above-board enough to actually make it onto some reporting websites, which in turn could cause it to show up in major media. It could end with real people creating Wikipedia pages or updating existing information on the internet and sourcing these entirely manufactured stories. While there are some outlets out there that do their research, and there are places which fact-check or might question these sources, imagine I’m able to absolutely flood the internet with this. At what fraction of all reporting/sharing/news/tweeting/youtubing/tiktoking/etc. does this outstrip what our system can actually investigate?

                    I also think it’s important to consider the human element: imagine I’m an actor interested in spreading misinformation and I have access to an LLM. I can outsource the bulk of my writing to it, simply telling it to write a piece about whatever I wish to spread, then reviewing the output as a human, making minor tweaks to the phrasing, combining multiple responses, or otherwise using it as a fast synthesis engine. I now have more time to spread that misinformation online, meaning I can reach more venues and create misinformation far more quickly than before. This is another vector through which LLMs can accelerate the spread of misinformation, and in fact I’m positive it’s already being used by many.

                    However, none of that touches on what I think is the most pressing issue of all: the use of AI outside its scope, and a fundamental misunderstanding of the bias inherent in systemic structures. I’ve seen cases where AI was used to determine when people should or shouldn’t receive government assistance. I’ve seen AI used to flag when people should be audited. I’ve seen AI used by police to determine who’s likely to commit a crime. Language models aren’t regularly used at policy scale, but they too have deeply problematic biases.

                    We need to rethink when AI is appropriate, recognize its limitations, and consider the ethical implications during the very design of the model itself, or we’re going to have far-reaching consequences which simply amplify existing systemic biases by reinforcing them in their application. Imagine we trained a model on IRS audits and used it to determine whether someone deserved an audit: we’d end up with an even more racist system than we currently have. We need to stop the over-application of AI, because we often have a fundamental misunderstanding of scope, reach, and the very systems we are training on.
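
                    As a minimal sketch of that feedback loop (entirely synthetic data and made-up feature names, not anything from a real audit system), here is roughly how a classifier trained on historically biased audit labels reproduces the bias even when the underlying behavior of both groups is identical:

                    ```python
                    # Illustrative sketch only: synthetic data, hypothetical features.
                    # Two groups behave identically, but group 1 was historically
                    # audited three times as often; the model learns exactly that.
                    import numpy as np
                    from sklearn.linear_model import LogisticRegression

                    rng = np.random.default_rng(0)
                    n = 20_000

                    group = rng.integers(0, 2, size=n)    # protected attribute (0/1)
                    income = rng.normal(50, 15, size=n)   # same distribution for both groups

                    # Biased historical labels: base audit rate 5%, tripled for group 1.
                    audited = rng.random(n) < 0.05 * (1 + 2 * group)

                    # Train on the biased labels; any proxy for group (zip code, employer
                    # type) would have the same effect even if group itself were dropped.
                    X = np.column_stack([income, group])
                    model = LogisticRegression().fit(X, audited)

                    # The model now recommends audits at unequal rates for identical behavior.
                    risk = model.predict_proba(X)[:, 1]
                    print("mean predicted audit risk, group 0:", risk[group == 0].mean())
                    print("mean predicted audit risk, group 1:", risk[group == 1].mean())
                    ```

                    The point isn’t the specific numbers; it’s that the disparity in the training data comes straight back out of the model, which is why the bias has to be addressed at the data and design stage rather than patched after deployment.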

                • Pēteris Krišjānis
                  1 · 1 year ago

                  @altair222 disinformation does not live or die on quality or quantity - it plays on people’s instinctive fears. It doesn’t really matter who generates it or how much of it is around. This doesn’t add anything new to the toolkit.

                  • @altair222@beehaw.org
                    1 · 1 year ago

                    I do understand what you mean. My instinctive reaction to the proposition that the quantity of disinformation doesn’t matter is to disagree, but I don’t have a well-founded argument against it, nor the health to study it right now. Do pardon me for breaking off the conversation here; I appreciate its good-faith nature.

                  • Pēteris Krišjānis
                    1 · 1 year ago

                    @altair222 thus ChatGPT, as a useless text generation tool, does not expand disinformation that much. For better or worse. In my humble opinion, of course.