What Technologists Get Wrong About Technology


So much of our lives today mixes and mingles with digital technologies, so much so that we often don't even think about it. Ordering an Uber or a DoorDash delivery, placing grocery orders, fielding constant reminders from social media, texts and emails, talking to smart speakers and set-top streaming boxes. With the rise of Artificial Intelligence and robotics, the technologists tell us, factory and office workers will be augmented with technology by 2030. The whole world of work will change. Except it won't. At least, not in the way predicted. What are technologists missing?

Technocrats, Silicon Valley elites and futurists have some awesome ideas. In fact, they add tremendous economic value and often, though not always, good social and cultural value. The odd thing is, this is also what they're missing: social systems, culture, human behaviour and an understanding of sociocultural systems. In other words, the hard science of computer science has nudged aside the soft sciences. This is creating problems.

Will humans work more alongside Artificial Intelligence (AI) in the future? Absolutely. Will it be good for humans? Most likely. Will this all happen by 2030 as predicted? Unlikely. Then there are DAOs, the decentralized autonomous organizations whose proponents suggest they will take down large corporations and completely change the way we work. Maybe someday, but not for a long time. Why not?

Human behaviour, culture and sociology. Technologists and techno-utopians also tend to avoid thinking about unintended consequences. Proponents of DAOs have great ideas, but they don't understand the force of bureaucracies.

When it comes to AI working alongside humans, yes, it can be beneficial and could actually give many workers an opportunity to focus more on the work they like rather than mundane, repetitive stuff. But there are headwinds that those in the field of AI are going to face. The two most powerful of these are privacy and human rights, as in free agency.

When it comes to privacy, governments in the WEIRD countries (Western, Educated, Industrialized, Rich, Democratic) are bringing new privacy laws into effect. Powerful ones. It started with the GDPR in the EU and then California. Canada's Bill C-27 includes provisions for AI as well. Tech companies building AI tools will have to be more transparent, and these laws are spilling over into human rights issues as well.

The human rights issues will focus initially on gender and racial bias. They will soon extend into the area of employee monitoring. The use of AI naturally means more surveillance by senior management. As unions start to catch up to the digital age, they'll be making noise. Citizen groups already are. People in the WEIRD countries look at China as an example of how invasive and intrusive AI can be in personal and work life. Those sociocultural systems reject that.

If employers are concerned over the Great Resignation today, they'll have more worries unless they figure out how a workplace culture can come to accept working with AI. Those that don't will face disruption and may struggle to find employees. It will be a delicate balancing act. It is, however, very possible. But it requires more than change management approaches.

Technologies get accepted in workplaces and societies when the culture accepts them. Telephones took decades to be culturally accepted in the workplace and the home. The iPhone was accepted more rapidly because the BlackBerry had already made smartphones acceptable in business. Apple, and Jobs, understood social behaviours and that the iPhone would be accepted as a social signal. Apple, probably more than any other tech company, puts a lot of work into social behaviours and cultural understanding. This is also why they understand timing. It's why they haven't launched Apple Glasses the way Google launched Glass, which failed in the consumer market but has done well in industrial applications.

AI faces bigger cultural and human behavioural issues than smartphones did. While humans have used technology since we started banging rocks together to make sharp edges, there is a trail just as long of failed technologies. And of technologies that fail at first but come to life later, when a culture decides it is time.

A cultural system may also decide to use a technology for another purpose other than what the inventor intended. Twitter was supposed to be a tool for ambulance crews to communicate with ERs when transporting patients. Bell invented the telephone to share opera music with the world.

As most of today's most prolific digital technologies came out of Silicon Valley, they arose from a Type 1 culture, that is, a "me" culture of highly individualistic thinking. A Type 2 culture is more of a "we" culture, built on communal thinking. So technologists have always focused on the "user" rather than the human, who participates in a culture. They may talk of making the world better, but they don't see the "world"; they see individuals. This isn't necessarily wrong. It's just the culture they came from. We see the world based on the culture we grow up in.

Typically, humans see tools as serving survival, either directly or indirectly, such as improving cooking time (Instant Pot) or reducing friction in organising and communicating (social media). When it comes to AI in the workplace, management sees it as helping reduce headcount, improve productivity and, hopefully, increase profit margins. Workers may see it as a threat to their livelihood, but they may also struggle to figure out just how well and how deeply AI can actually help them. This is where bureaucracy comes into play.

The larger the organisation, the more bureaucratic it becomes. Policies and procedures take on a life of their own. Anyone who's worked with an ERP system, even in a smaller company, knows what a nightmare they can be, and almost always are. ERPs are a cash cow for IBM, SAP and the big accounting and consultancy firms. It is likely that AI will find its place within ERP systems first and grow outward from there. If AI can make ERP systems work, it would be a blessing!

So all of this takes time. Cultures, be they in civil society or the workplace, decide when a technology is accepted. This key factor is often misunderstood or not valued by technologists, pundits, futurists and analysts.

AI has some other issues, as do most cognitive digital technologies today. That’s for another post.

--

Giles Crouch | Digital Anthropologist

Digital Anthropologist | I'm in WIRED, Forbes, National Geographic etc. | Speaker | Writer | Cymru
