Why AI might be commonly misunderstood
I think the problem stems from the word “Artificial” in Artificial Intelligence.
It can mean two different things.
First interpretation: artificial as in man-made, like artificial limbs - they are man-made and mimic the function of natural limbs. Applied to AI, it means AI is man-made but genuinely displays intelligence. I align with this definition.
Second interpretation: artificial as in fake, like stocks being artificially inflated - a misrepresentation of underlying reality. Applied to AI, it means AI isn’t really ‘intelligent’ - it merely mimics intelligence without possessing it.
I suspect the second interpretation is widespread because it’s comfortable. It lets us avoid uncomfortable questions. If AI isn’t intelligent, we don’t need to reexamine what intelligence actually is. Humans remain uniquely intelligent, and the established order stays intact.
But I subscribe to the first interpretation, and I think it forces us to confront something important. So let me first address the elephant in the room.
The elephant in the room is language - human language specifically. Animals communicate too - dolphins and elephants have elaborate systems. But no known species approaches the expressive diversity of human language. In that sense, language has been a uniquely human trait: nothing outside our species could follow it.
Until now. AI can understand language. If you ask AI to perform a task and give it the tools to do so, it can take an instruction in English and convert it into actions with remarkable accuracy.
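As a toy illustration of that instruction-to-action loop, here is a minimal sketch. The model is a hypothetical stub (in practice an LLM would produce the structured tool call), and the tool names and arguments are invented for the example:

```python
# Toy sketch: English instruction -> structured tool call -> action.
# `fake_model` stands in for an LLM; `TOOLS` is a hypothetical registry.

def fake_model(instruction: str) -> dict:
    """Stub for a language model mapping English to a tool call."""
    if "weather" in instruction.lower():
        return {"tool": "get_weather", "args": {"city": "Paris"}}
    return {"tool": "noop", "args": {}}

TOOLS = {
    "get_weather": lambda city: f"Sunny in {city}",  # pretend API
    "noop": lambda: "nothing to do",
}

def act(instruction: str) -> str:
    """Dispatch the model's structured output to a concrete action."""
    call = fake_model(instruction)
    return TOOLS[call["tool"]](**call["args"])

print(act("What's the weather in Paris?"))  # -> Sunny in Paris
```

The real systems are vastly more capable, but the shape is the same: free-form language in, grounded action out.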
Here’s my claim: Articulation of language is both a necessary and sufficient condition for exhibiting human-like intelligence. And AI does it really well. So it is intelligent, exactly like the human brain.
Imagine you call a friend sitting at a beach. You ask what she sees. She remarks “the waves are particularly strong today” - but that’s not ALL she sees. That’s only what’s remarkable about the beach that day. There are other things - people, birds, sand, sun - but she chose to remark on the waves. Her expression, for which language is merely the medium, is fundamentally an act of abstraction. The ability to abstract, to highlight what’s unique in a situation, to compress a vast visual field into a low-bandwidth signal - that is at the core of what it means to be intelligent.
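The beach example can be written as a toy computation: abstraction as lossy compression. The salience scores below are invented for illustration; the point is only that a rich scene gets reduced to the single most remarkable observation:

```python
# Hypothetical salience scores for everything in the scene.
scene = {
    "people": 0.2,   # ordinary for a beach
    "birds": 0.1,
    "sand": 0.05,
    "sun": 0.15,
    "waves": 0.9,    # unusually strong today
}

def remark(observations: dict) -> str:
    """Compress a whole scene into one low-bandwidth remark."""
    salient = max(observations, key=observations.get)
    return f"the {salient} are particularly strong today"

print(remark(scene))  # -> the waves are particularly strong today
```

Everything else in the scene is discarded; only the most surprising element survives the compression.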
Language isn’t just speech - it’s the capacity to compress, abstract, and communicate complex ideas. Non-verbal humans who communicate through typing or other means still possess language; the medium differs but the capability is intact. And consider pre-verbal infants: they lack both sophisticated language and the full intelligence of adults. As language develops, so does intelligence. The correlation isn’t coincidental.
If you’ve read my previous post on nonbios, you’ll recall I talked about strategic forgetting - something closely related: the ability to abstract away what’s not important and focus on what matters.
But the objections remain: If AI is really ‘intelligent’, why does it make simple mistakes like counting the wrong number of ‘r’s in “strawberry”? And if AI is truly intelligent, why hasn’t it taken over all white collar jobs yet?
These are fair questions, and they’re exactly what I’ll address in the next post.

