• GaMEChld@lemmy.world
    19 days ago

    I don’t see this as a problem; rather, it’s an opportunity to study information and disinformation propagation.

      • GaMEChld@lemmy.world
        18 days ago

        But it’s not really a NEW problem. We knew LLMs are trained on aggregate human data, and we know aggregate human data is fundamentally flawed, inconsistent, unreliable, etc.

        Like, was there a point at which people just decided, nah, AI is just plain accurate? Or is that just what morons always thought, despite the permanent warnings plastered everywhere saying THIS AI CAN MAKE MISTAKES, CHECK EVERYTHING!