The Unexpected Ironies of Artificial Intelligence

Image by Wilfried Pohnke from Pixabay

Most of the Artificial Intelligence (AI) we humans interact with today, we rarely even know about, let alone think about. When you run a search on Google or DuckDuckGo, or ask Siri or Alexa a question, it all happens in a hidden world. AI is everywhere now. It is doing a lot of very cool and very good things, and it's getting better at them all the time. Our lives have benefited from AI and will continue to. The funny thing is, there's rather a lot of irony around what we thought AI could do, what it actually does, and how it might play out in far less ominous ways than Hollywood has suggested.

The term Artificial Intelligence is an umbrella term. It includes Machine Learning, Natural Language Processing, Neural Networks and others. Sometimes these come together to solve a problem or deliver a product. Sometimes they're separate, but they're all part of AI as a whole.

The AI applications used today are Narrow AI, that is, they do one thing and do it exceptionally well. Narrow AI applications are getting better and better. Those in the AI field felt, at first, that these various tools and capabilities would eventually all come together, with the result being General AI.

General AI is the stuff of science fiction and Hollywood: the Terminator, or a best buddy who understands us and can hold genuine conversations outside of an algorithmic formula. Essentially, AI that is conscious and is fine listening to all our woes and laughs at our bad jokes. General AI does not exist. It may never.

One of the biggest ironies around AI is that in the early days, going back to the 1960s, and even until recently, we assumed AI would solve simple problems. It turns out AI is really not good at solving simple problems. AI is good, however, at solving complex problems. AI, it turns out, didn't turn out the way we thought it would. An AI, for example, cannot integrate touch, hearing and vision the way a two-year-old child can.

Another irony is that AI researchers figured that with enough computing power, data and case studies, and by stitching all these tools together, AI would produce general intelligence. That, it turns out, isn't working either.

The problem is, while AI is very, very good at certain things, those things have to be specific. AI does not, like us, live in a 3D world. AI does not understand time and space. Nor does it have a grasp of something as basic as the fact that you can't stack a square block on a sphere. Basically, AI does not have the situational awareness or contextual understanding that a two-year-old does.

What we also discovered along the way is that we don't even know what consciousness is. Nor have we really defined what intelligence is. So the next time an AI app says it can be your second brain: no, it can't. But another irony of AI is that it is making us look more deeply into what consciousness and intelligence are. That could be a boon for neuroscience and mental health, as well as dementia research.

And it may be quite alright that we never get to General AI; if we did, we might face some scary truths. AI technologies do some great things and will play a vital role in tackling complex issues like climate change, global economics and healthcare. But AI is unlikely to be our second brain, let alone any kind of brain at all.

The irony is, AI may just do better for humanity than we thought it might, but in different ways than we thought it would.
