• Soyweiser@awful.systems
    10 months ago

    It prob came from some of the AI ethicists fired from various AI companies, the ones who actually worry about real-world problems like racism/bias in AI systems btw.

    The article itself also mentions ideas like this a lot. This passage, for example: “Fan describes how reinforcement learning through human feedback (RLHF), which uses human feedback to condition the outputs of AI models, might come into play. ‘It’s not too different from asking GPT-4 “are you self-conscious” and it gives you a sophisticated answer,’” is the same idea with extra steps.