Why we need a focus on ‘AI for good’, not ‘AI for superiority’
Article by: Alex Maxwell
For all the talk about the AI race, we need to prioritise a sustainable approach
The advancing AI race, driven by big tech’s vision of an AI-powered future, is creating an avalanche of nonstop news. The race seems to be rushing out of control, with the desire to build the most intelligent model outweighing the immense potential AI has to responsibly improve lives.
Never far from the spotlight, Elon Musk has weighed in, albeit less directly than usual: he recently joined other tech leaders in signing an open letter calling for a halt to the training of ‘powerful’ AI systems. But the letter, signed by some leaders who could be argued to have started the AI race themselves, has been tainted by accusations of fake signatories, misrepresentation of industry research, and hyperbole. It is perhaps an embodiment of the AI race’s chaotic character, and it points us towards the wider shift that is needed.
‘Growth at all costs’ mentalities need to be replaced by sustainable and responsible mindsets. The pitfalls of the former are starkly apparent in a world that cannot keep growing on its current trajectory; yet it is entirely plausible to build systems that allow AI to develop in a safe and sustainable environment. This means a focus on ‘AI for good’ rather than ‘AI for superiority’.
Shifting the focus
Amidst all the hype and speculation surrounding AI, tech pioneer Martha Lane Fox is calling for more “sensible conversations about its capabilities”. Acknowledging that AI is happening whether we like it or not, she raises two points: first, “we have to decide whether we’re going to digitise in a way that is ethical, that is inclusive, that is sustainable”; and second, that we need far more diversity around the table when discussing future technologies.
These are values that should be inherent to the process. We need more voices in the conversation and more ethical frameworks that can foster a sustainable ecosystem of development. That way, we can take a more inclusive, in-depth and broader perspective on how AI is used today and how it should progress.
Responsible innovation
The UK government has just unveiled its AI white paper to ‘guide the use of artificial intelligence in the UK, to drive responsible innovation and maintain public trust in this revolutionary technology’. While the paper also aims to turbocharge growth, it is these guiding principles that need to be at the heart of the strategy.
It goes on to illustrate how “AI is already delivering real social and economic benefits for people, from helping doctors to identify diseases faster to helping British farmers use their land more efficiently and sustainably.” These advancements epitomise ‘AI for good’ innovation.
Working with scaleups, we witness first-hand how AI is shaping this future landscape. There is an impressive array of benefits on show, from cultivating sustainable travel networks and optimising agricultural ecosystems to using synthetic models that replicate real-world environments without infringing on people’s privacy.
To make this picture a reality, we need to develop AI as a tool that aids the human experience, rather than one that predicts all of our needs, infringes on our privacy and exacerbates bias.
It’s why we need to focus on AI’s power for good, not on its role to help fulfil big tech’s superiority complex.