Discussion about this post

Matthew T. Mason

Your post reminded me of an Oliver Brock paper, "The Work Turing Test," which adds some ideas about organizing the test to cover a breadth of tasks. (It isn't easy to find, but Oliver has promised to post it to arXiv.)

One difference is the connection to language that you propose. Here is an issue I am curious about. I would say that our conscious processes have only a tenuous grasp of what our subconscious physical intelligence (the "inner robot"?) is doing. If you ask somebody how they do something, they are likely to reveal a limited understanding. I am surprised that, when somebody gives a simplistic answer, the inner robot doesn't smack its forehead in frustration. But I guess the inner robot doesn't understand the answer. Anyway, since your proposal straddles language and physical intelligence, maybe it would shed some light on the gap between them.

Jim Menegay

The argument you are making, that disembodied intelligent use of language is not 'true' intelligence, has also been made repeatedly by Yann LeCun, the former head of Meta's AI effort. As it happens, in two recent posts on my blog, I asked a couple of AI chatbots what they thought of this argument.

Both Kimi K2 and ChatGPT-5 gave a pretty good demonstration that they understood the issues. I think they both offered a nuanced answer: physical grounding of language may be helpful when discussing block worlds or cats on mats, but it is not strictly necessary when discussing abstract ideas. I report the responses of these two chatbots verbatim; Claude, Gemini, DeepSeek, and ChatGPT 4.5 also gave quite good responses to this question, as well as to several others.

