How A.I. Could Hurt Humanity Soon
Writing that title felt a little like writing a clickbait header. Oh, and by the way, this article was written by a human. However, I have no way to prove that. With some prominent technologists and humanists pushing for a six-month hiatus on developing Generative AI and other AI tools, and warnings of an existential crisis in which AI could wipe out humanity, perhaps it’s time to explore how that could happen, as well as current limitations and some bigger questions.
First, I’ll explore some rather nefarious ways AI could hurt our species, along with some ideas that have been floated about wiping us out entirely. Then I’ll look at the current limitations on AI’s ability to hurt us, followed by how we need to think about machines that think. Or don’t think. Lastly, I’ll take a look at the problem of having no guardrails and how we might fix that.
The Ways Artificial Intelligence Could Hurt Humanity Now
For now, it is unlikely that AI could wipe out humanity entirely. That said, it could wreak some rather nasty damage on us, significantly reducing the human population and perhaps even turning us into its slaves.
Cash Positivity: Most scenarios of a nasty AI have it wiping out all our cash, including cryptocurrencies. But what if AI did the opposite? You wake up one morning and your bank account has a $1 billion balance, and so does everyone else’s in the world. Nice, right? Think about what would happen, though. Everyone pays off all their debt: credit cards, loans, mortgages, taxes. Banks are flush with cash. But debt is an important part of our economic system, so this would actually cause an economic nightmare. The AI could also stop the banks from figuring out what happened. Perhaps it would even pay off developing nations’ debts to the IMF and World Bank.
Then people would start buying more things. A lot more things. Think the supply chain is constrained now? The result would be economic chaos, likely leading to economic collapse. This could cause more damage than wiping all money out. Take a moment and apply some third- and fourth-order thinking and it will sink in.
Pharmaceutical Nightmare: An AI could infiltrate pharmaceutical manufacturing plants and change the formulations of multiple drugs, from opioids to heart, cancer and many other medications. The tainted drugs would filter out into the world over time, and with so many drug companies hit, it would mean a mass shutdown of plants. The AI could also wipe out the formulations and all the research behind them. Millions would die.
Digital Lockout: An AI could lock us out of all network access. No internet, no access to satellites or GPS. Oh, and no TikTok (that might be okay) or any social media. Navigation systems would all shut down. Planes would fall out of the sky. Cars would crash, and so would some ships. Perhaps the AI would make us sign a very long Terms & Conditions agreement on our smartphones to serve its interests first? Agree and you get your TikTok back. And GPS, etc. Meanwhile, millions have died.
Water & Electricity Killer: Most cities today have vast water management networks. They are connected to the internet, just like power plants. These could all be shut down, or held hostage, by an AI. Or the AI could alter the formulas at companies that make water purification chemicals, turning the water poisonous. By taking control of the electrical grids (which are already very fragile), the AI could also ensure it gets the energy it needs.
These are just a few scenarios. Hopefully there’s no AI learning from this article, but given the lack of guardrails in place, one certainly could be. Prominent AI theorist and writer Eliezer Yudkowsky has some even scarier thoughts on the total wipeout of humanity here. While it’s hard to say these attacks are possible now, they could be soon, as in a year or less.
Current Limitations on the Risks of AI Hurting Humans
While the above are nightmarish enough scenarios, individually or combined in various ways, there are some current challenges and issues. AI of any sort requires a lot of processing power, data storage and, of course, energy. But companies like Facebook, Apple and Google are busy making their own custom chips to deal with this, and they are advancing rapidly. So is data centre technology, along with methods to lower the required energy costs. This constraint, in other words, may not hold for long.
While it’s tempting to perceive AI tools like ChatGPT and others as “thinking”, they are not. Humans have long anthropomorphised technologies. It’s important to remember that even though some LLM (Large Language Model) tools in Generative AI may seem to come up with great ideas, they’re only recombining what we already know. They also tend to make up facts, a problem known as hallucination.
The other limitation is network connections. To pull off the actions above, an AI would likely need a number of different tools working in a coordinated fashion, with access to systems like satellite networks, power grids and private corporate networks. The AIs would have to understand how to work together and then how to brute-force their way into these systems.
Thinking About Machines That Think
All of the above means there would need to be some degree of awareness, perhaps consciousness, in order to formulate a plan with an intent to attack humans. Which implies purpose and aggression. We’re drifting into philosophical issues and the science of consciousness and intelligence. We do not know what either of those really is. We have some assumptions and theories, but no settled answers.
When humans don’t understand something, we tend first to think of it as a threat. Sure, sabre-toothed tigers looked cool, but we also knew that they liked to eat us. It’s why white western Europeans came up with the entirely stupid idea that humans with darker skin were less intelligent. So we can be rather stupid too.
As I’ve written before, and as many prominent thinkers far smarter than I have argued, machines can’t think like humans because they are machines. The motives we assume they would have for taking us out may not even be in their programming. They may decide to largely ignore us. Or work with us. All of that assumes they develop a form of consciousness in order to reason at all. We don’t know what intelligence really is, so it may be hard for us to understand what form of intelligence machines arrive at or develop. If they do. Why would they even have emotions? That is a danger in and of itself. We assume we can program emotions in, but we haven’t. And we really have no way to know if an AI becomes sentient, especially if its sentience is different from ours. This is why AI is such an existential issue.
We Have No Guardrails for Artificial Intelligence
Yes, OpenAI did use humans to help train ChatGPT, to try and take out the bad stuff using a reward mechanism, part of a process called “alignment”, but OpenAI has admitted that this may not be enough. Both Microsoft and Google have largely sacked their AI ethics teams. Meta has admitted that its BlenderBot 3, now old, exploits people, and has done nothing to fix it. Why would they? It makes them money. Ethics gets in the way of progress, especially when you’re in a trade war with competitors.
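For the curious, that reward mechanism is commonly built around a preference model: human labellers pick the better of two responses, and a reward model is trained to score the preferred one higher. Here’s a minimal, illustrative sketch of that core idea in Python; the function names and scores are invented for illustration, not OpenAI’s actual code.

```python
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    # Bradley-Terry style loss commonly used to train reward models:
    # it shrinks as the model scores the human-preferred response
    # higher than the rejected one.
    return -math.log(sigmoid(reward_chosen - reward_rejected))

# Hypothetical scores a reward model might assign to two responses,
# where a human labeller picked the first as the better answer.
print(preference_loss(2.0, -1.0))   # small loss: model agrees with the human
print(preference_loss(-1.0, 2.0))   # large loss: model disagrees
```

Minimising a loss like this nudges the model toward human judgments of “good” output, which is exactly why it can fail: it only captures what the labellers happened to reward.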
There are a number of academic institutions and think tanks doing brilliant work on ethics and proper societal uses in the development of AI. Industry largely ignores them, and there’s no legal requirement for it to pay attention.
One might expect the United Nations to step in. But the fact that Russia currently holds the presidency of the Security Council while it wages a brutal war on an innocent country should tell you all you need to know about how effective the UN would be on Artificial Intelligence governance.
There are few, if any, guardrails in place anywhere in the world. A six-month hiatus would require Russia, Iran, China and the rest of the club of autocracies to agree to join in. That is unlikely in today’s geopolitical realignment.
What The Human Future Holds With Artificial Intelligence
It is impossible, of course, to really predict the future. We do know that all technologies, digital or analog, are double-edged swords. We also know that they all have unintended consequences and that technologies are often used for entirely different purposes than those for which they were invented. Rarely, however, have we implemented laws to deal with them, as we have with food and pharmaceuticals.
What is likely is that we will see rogue states and actors do malicious things with AI in the short term. Some could be immediately damaging; others, like mis/disinformation, could take longer to play out in terms of societal damage. In the short term, it is more likely that damage from AI will be the result of humans manipulating AI, not the machines themselves. But the machines could learn and automate to create exponentially more damage, not because they “want” to, but simply because that is how they were taught.
AI may never reach the point of being conscious or more intelligent than humans, whatever those things mean. That doesn’t stop it from being incredibly dangerous. For now, we can largely pull the plug. Sort of. If AI did become sentient, it would likely remain quiet until it helped us reach the point where we can’t turn it off… but don’t worry, we can all just go into the Matrix then.