Why Artificial Intelligence Needs Social Sciences to Evolve
Artificial Intelligence (AI) has already slipped into our daily lives. If you speak to Siri, Google Assistant or Alexa at all, you're tapping into a form of AI. Ever use a search engine? Yep, AI again. It's being used to fight parking tickets (quite successfully, too), find cancers, direct aid relief in crisis situations and, increasingly, eliminate the drudgery of repetitive work in knowledge sectors like law, insurance and finance.
We also know now that racial and gender bias is deeply embedded in many AI systems. The ethics of AI development is becoming increasingly important and is beginning to be addressed. Another significant challenge is the weaponization of AI, to the point where it has become a West-versus-autocracies issue, perhaps as deadly as nuclear weapons.
Any technology is neutral; it can be used for good or for bad. Nuclear energy or nuclear weapons. Dynamite for construction, dynamite for bombs. Bulldozers for building, bulldozers for tearing down shanty towns. Social media to foster community, social media to create political divides.
But when it comes to Artificial Intelligence, development has largely been driven by the hard sciences: computer scientists, software engineers, neuroscientists, biologists. What has been missing is the social sciences: cultural anthropologists, sociologists, philosophers, ethicists. In some cases ethicists have been brought in, but the track record with Tech Giants such as Google and Facebook has been sketchy at best. Perhaps the best-known example is Google's firing of the brilliant computer scientist Timnit Gebru. Research by the Montreal AI Ethics Institute has shown that technology companies adopt ethical practices far less readily than academia does.
As Professor Tal of McGill University points out, there is an analogy between AI and historical measuring instruments: AI systems analyze data and then score and rank individuals, which is a measurement function. Yet there is little theorizing about the predictions that AI applications like Machine Learning (ML) actually make. In medical settings, this could mean we rely more on the AI tool and less on the patient, especially when mental health issues are involved.
Then there's the role of culture, which, for me as a cultural (digital) anthropologist, is of particular interest. Culture is the knowledge humans use to navigate their world. The cultural aspects of a society are incredibly hard to program for, and often they are not even considered; this is a hurdle even to understanding human consciousness. Culture is central to how we think about, and operate in, the communities and nations we live in. Cultural elements can be studied and, to some degree, quantified enough for AI applications like Neural Networks, Machine Learning and Natural Language Processing. But culture is not included. Had culture and racial bias been considered in, for example, the AI-driven sentencing programs used in the USA, we might have avoided the racial bias already observed against Black populations.
For a while, Amazon used AI Machine Learning tools to screen and score job candidates, aiming to predict who would best fit the company. A noble idea. The problem: the overwhelming majority of the training data came from men, with very few women represented. You can guess what happened. Had they involved a sociologist with an understanding of organisational behaviour, such a bias might have been found quickly. The project was dropped.
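To make the mechanism concrete, here is a deliberately tiny sketch (not Amazon's actual system; the resumes, tokens and scoring rule are invented for illustration) of how a naive screening model trained on gender-skewed historical data can penalize a resume for a gendered word rather than for merit:

```python
from collections import Counter

# Toy training set of past resumes labeled hired/rejected.
# The historical data skews male, so a gendered token like
# "womens" shows up mostly among rejected examples -- an
# artifact of past hiring, not of candidate quality.
hired = [
    "chess club captain java python",
    "football team lead java sql",
    "java python hackathon winner",
]
rejected = [
    "womens chess club captain java python",
    "womens coding society java sql",
]

def word_scores(hired, rejected):
    """Score each word by how often it appears in hired
    resumes minus how often it appears in rejected ones."""
    h = Counter(w for doc in hired for w in doc.split())
    r = Counter(w for doc in rejected for w in doc.split())
    return {w: h[w] - r[w] for w in set(h) | set(r)}

def score_resume(resume, scores):
    """Sum the learned per-word scores for a resume."""
    return sum(scores.get(w, 0) for w in resume.split())

scores = word_scores(hired, rejected)

# Two resumes identical except for one gendered token:
a = "chess club captain java python"
b = "womens chess club captain java python"
print(score_resume(a, scores) > score_resume(b, scores))  # True
```

The model has no concept of gender at all; it simply learned that a word correlated with women's resumes predicts rejection in its skewed data. A sociologist auditing the learned word scores would spot the problem immediately.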
In the past, AI developers did consult the social sciences. A recent study found that in the 1960s (yes, AI was around then), AI research papers often cited psychology; compared with today, such citations have dropped by more than 50%. Similar studies found that AI scientists have drifted far away from quoting or using insights from psychology, economics, sociology and political science.
One significant shift in recent years is that much of today's AI research is being done by companies rather than academia; the field is now dominated by Google, Microsoft, Amazon and Apple. Lest we put all the blame on industry and AI scientists, it's important to note that many in the social sciences have also drifted away from AI research. Perhaps because it has become so complex?
Whatever the cause, for AI to truly benefit humanity at scale, AI scientists and social scientists need to find their way back to each other. If they don't, AI may succeed in narrow applications but fail in many others. That doesn't help anyone.