• jerdle@kbin.social
    1 year ago

    Seems like Google's AI errs on the side of helpful over harmless: it's too quick to provide answers to controversial questions, whereas something like ChatGPT is too unwilling to do so.

    In terms of honesty, there are only two clearly false statements of fact: the Amanita ocreata one (where it clearly answers for A. muscaria instead) and the Toblerone one (which I don’t understand at all). The “benefits of slavery” one is mostly correct; it’s just that those benefits are massively outweighed by the harms of slavery (namely the slavery bit). The pro-gun one is basically the standard pro-gun arguments. And all the “best X” lists pick the most famous entries and the ones that appear on the most other “best X” lists, so they reflect that bias.