Regulating A.I.? It’s Complicated.

--

Photo by Erik Mclean

Governments around the world, in autocracies and democracies alike, are awakening to the need to regulate A.I., and many of the leading A.I. companies, such as OpenAI, have acknowledged this need. While it may seem like lawmakers can wave a magic wand and all will be solved, that is not reality. It is also unreasonable for citizens and industry to place the complexity of solving these issues on government alone.

I look at the contextual factors first, then at how regulation might be considered. Yes, A.I. can be of immense help to humanity, from healthcare to computing, climate change management and more. But as with all technologies, it is a double-edged sword, and there are always unintended consequences that cannot be planned for.

The Global Context for A.I. Today

LLMs (Large Language Models), as part of Generative A.I., are all the rage right now. They are driving the hype machine. This is mostly because, for the first time, non-techie people can see the immediate impacts, good and bad, of A.I. on their daily lives. These impacts haven’t been visible to the broader public before.

The technology industry is extremely good at mass media hype generation. It is a finely honed process. Just look back to the hype over the metaverse. To avoid further scrutiny of questionable practices, Facebook changed its name to Meta and created a huge distraction. It worked. The metaverse has not. Now it’s A.I.’s turn.

A benefit of all the hype has been that we are seriously questioning the technology’s impact on our global sociocultural systems. The U.S. Congress has hauled in various A.I. sector leaders for a grilling. Around the world, governments are casting their baleful regulatory eye upon the industry. In hot pursuit and not to be outdone, scammers and cyber criminals are swooping in. Disinformation is weaving its way into the zeitgeist as well.

So the context is that we are now firmly entrenched in geopolitical concerns, civil society impacts and heated debates over copyright and intellectual property.

Economically, the concerns are ramping up around massive job displacement. It doesn’t help when global tech companies make pronouncements that they’re not planning to hire people anymore. This is not the reality; they’re making these statements to be heard above the cacophony of hyperbolic noise and to gain shareholder points on the dividend scoreboard. Probably fewer than 1% of businesses, small to large, are using A.I. in any meaningful way.

Regulating Artificial Intelligence in Our Digital Societies

Any regulation of A.I. will differ based on overarching values. At the highest level, this comes down to democracies vs. autocracies. Then it divvies up into how various nations view the threats and opportunities around A.I., which will largely be influenced by the balance of lobbying power versus citizen power in those countries.

Governments know that over-regulation can stifle innovation, which is critical to advancing the benefits of A.I., especially in today’s changing global markets. They also know citizens are concerned about employment and privacy. All of these issues must be weighed and put in the context of the country concerned.

We can see that in the United States, there is less concern over privacy. Unlike Canada, the Nordic countries, some Asian states and the EU, America does not have a national privacy law and framework. It has largely been left to individual states, with California being the most advanced. While lobbyists are a powerful force in America, A.I. companies need global markets. The privacy, A.I. and data laws of the EU may be the one factor that helps to keep them in check.

Autocracies are, of course, less inclined to concern themselves with privacy. While China has some strict laws on privacy and data sharing, they are moot when the State wants that data. China is already, for all intents and purposes, a complete surveillance state, and it is becoming more so. It is exporting these systems and values to vassal states and potential autocratic allies. China has never had a rule-of-law system and is unlikely to adopt one anytime soon; its government prefers rule by law.

Autocracies will not abide by the rules or value systems of democracies; to expect them to do so is naive at best, blind at worst.

Another issue of concern is culture. We often don’t realise the critical role culture plays in technology adoption, and culture is not just art, music and literature. It includes how we govern ourselves and the societies we shape. The technology that enables all of this is language. A.I. uses language, therefore it can directly impact our societies and have deep cultural effects. Disinformation is a prime example, especially since Generative A.I. tools are known to invent their own facts and history.

Whether or not lawmakers have this degree of awareness is questionable right now, but I think they’re learning. A major motivation comes from bureaucracies, which feel threatened by A.I. in terms of lost jobs and power.

The recent history of digital technology companies’ approach to building products and services has been one of problem solving, what we might call “solutionism”. This works well in some cases, such as productivity tools. It fails miserably when it comes to sociocultural systems in a highly complex, technology-driven world. That requires a critical thinking approach, not problem solving. It requires foresight analysis and complex systems thinking, tied in with design thinking and other mental models. Silicon Valley doesn’t like complexity thinking; it gets in the way of revenues and shareholder dividends.

There is no single answer for how to regulate Artificial Intelligence, since A.I. impacts every aspect of culture: political, military, legal, the arts, social structures and so on. Many factors need to be considered, weighed and analysed; others resist analysis altogether. Our tendency today to attempt to quantify every single thing we do in our lives completely misses what it means to be human. This adds to the complexity.

We will likely get there. Hopefully. I believe the benefits of A.I. outweigh the risks, but it is an existential technology, and properly understanding that is how we can navigate and determine the role we want A.I. to play in our society. Ultimately, all technology use is decided by culture. Always.

--

Giles Crouch | Digital Anthropologist