When it comes to tools powered by large language models, users generally fall into two broad categories. On one side are those who treat AI as a powerful but sometimes faulty service whose responses need careful human review to catch factual and reasoning flaws. On the other are those who routinely outsource their critical thinking to what they see as an all-knowing machine.

Recent research goes a long way toward building a psychological framework for that second group, which regularly engages in “cognitive surrender” to AI’s seemingly authoritative answers. The research also offers experimental evidence on when and why people are willing to outsource their critical thinking to AI, and on how factors like time pressure and external incentives affect that decision.

  • floofloof@lemmy.ca · 1 month ago

    In general, “fluent, confident outputs [are treated] as epistemically authoritative, lowering the threshold for scrutiny and attenuating the meta-cognitive signals that would ordinarily route a response to deliberation,” they write.

    People have always conflated confidence with ability and knowledge. That’s why so many positions of power are occupied by confident bullshitters. It seems like that tendency transfers over to people’s interactions with LLMs.

    It would be interesting to experiment with an LLM trained to sound less confident and more tentative or self-deprecatory. Maybe the results would be different.
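    A cheap way to pilot that without any fine-tuning: hold the question fixed and vary only the system prompt, one confident and one deliberately hedged, then show both answers to participants and measure how much scrutiny each gets. A minimal sketch in Python, assuming the OpenAI SDK with an API key in the environment; the prompts and model name are illustrative placeholders, not anything from the study:

    ```python
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Same question, two personas: only the stated confidence differs.
    CONFIDENT = "Answer directly and authoritatively. Do not express doubt."
    TENTATIVE = (
        "Answer the question, but hedge every claim: flag your uncertainty, "
        "say where you might be wrong, and ask the reader to verify."
    )

    def ask(system_prompt: str, question: str) -> str:
        """Return the model's answer under the given persona prompt."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat model would do
            messages=[
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    question = "In what year did the French Revolution begin?"
    print("confident:", ask(CONFIDENT, question))
    print("tentative:", ask(TENTATIVE, question))
    ```

    If the paper’s framing is right, the tentative version should trip those meta-cognitive signals and route more readers into actually checking the answer.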