When it comes to tools powered by large language models (LLMs), there are generally two broad categories of users. On one side are those who treat AI as a powerful but sometimes faulty service, one that needs careful human oversight to catch reasoning or factual flaws in its responses. On the other side are those who routinely outsource their critical thinking to what they see as an all-knowing machine.
Recent research goes a long way toward building a new psychological framework for that second group, which regularly engages in “cognitive surrender” to AI’s seemingly authoritative answers. The research also offers experimental evidence of when and why people are willing to outsource their critical thinking to AI, and of how factors like time pressure and external incentives can affect that decision.



People have always conflated confidence with ability and knowledge. That’s why so many positions of power are occupied by confident bullshitters. It seems that tendency carries over to people’s interactions with LLMs.
It would be interesting to experiment with an LLM trained to sound less confident and more tentative or self-deprecating. Maybe the results would be different.
yeah! I think that’s an active area of research. from a quick search, here’s an example:
https://dl.acm.org/doi/full/10.1145/3613904.3642122