Given what we currently know about LLMs, these are stunningly unscientific positions for a leading company that builds AI language models. While questions of AI consciousness or qualia remain philosophically unfalsifiable, research suggests that Claude’s character emerges from a mechanism that does not require deep philosophical inquiry to explain.
Jesus fuck, it's just fancy autocomplete. It's Markov chains all the way down. It doesn't have a soul.
I'm so sick of "AI safety and research companies" talking up sci-fi concepts and glossing over the actual harm LLMs do.