As a journalist who covers AI, I hear from countless people who seem utterly convinced that ChatGPT, Claude, or some other chatbot has achieved “sentience.” Or “consciousness.” Or, my personal favorite, “a mind of its own.” The Turing test was aced a while back, yes, but unlike raw intelligence, those qualities are not so easily pinned down. Large language models will claim to think for themselves, describe inner torments, even profess undying love, but such statements don’t imply interiority.
Could they ever? Many of the actual builders of AI don’t speak in these terms. They’re too busy chasing the performance benchmark known as “artificial general intelligence,” which is a purely functional measure of what a system can do, not of what it might feel.