Ahead of his Press Play talk – “The AI Arms Race and Dangerous Technology Narratives” – we spoke to Tom Westgarth about his thoughts on the future of AI, and whether or not we should be scared of it.
Please note: This interview has been edited for increased clarity and readability. Tom’s views are still fully represented.
Although it seems a little obvious, it’s important to set context. What is AI?
It’s not a silly question to ask. AI is, or are, software systems designed to meet complex human goals. AI is known as a general-purpose technology, which means it is able to operate across multiple sectors and multiple use-cases – regardless of levels of supply and demand, and regardless of the service or good being provided.
There are different examples of AI. The most well-known is machine learning, which refers to a series of statistical techniques that are able to effectively learn from data without being explicitly instructed to. Other well-known examples include natural language processing – systems that can understand, manipulate, and reproduce human language using the internet as data – and forms of image processing that can classify images and videos.
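To make “learning from data without being instructed” concrete, here is a minimal sketch – not from the interview – of a model inferring its own rules from labelled examples. The library (scikit-learn), the toy dataset, and the task are all illustrative assumptions:

```python
# Illustrative sketch of machine learning: rather than hand-writing rules,
# we give the model labelled examples and let it infer the rules itself.
# (The decision tree, the toy data, and the task are assumptions for
# illustration only, not something discussed in the interview.)
from sklearn.tree import DecisionTreeClassifier

# Toy labelled data: [hours of sunshine, rainfall in mm] -> "good day out?"
X = [[8, 0], [7, 2], [2, 20], [1, 35], [6, 5], [0, 40]]
y = ["yes", "yes", "no", "no", "yes", "no"]

model = DecisionTreeClassifier()
model.fit(X, y)  # the "learning" step: no rules are written by us

# The fitted model generalises to an input it has never seen.
print(model.predict([[5, 10]]))
```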
How does AI fit in with modern warfare? And is it a reality we have to learn to live with?
This is another really foundational question. What we’re now seeing, in the military space, is AI moving into production. Previously it was a lot more theoretical; now we’re seeing applied research turn into things that are actually used in the military arena. An example from the last year or so would be Israel, where the IDF has started to use AI-powered drone swarms. In the US, AI co-pilots are now flying alongside human pilots. So we’re seeing this increasingly emerge within the military arena.
Is it a reality that we have to accept? It’s really tough to say. This is another frontier – I don’t want to be too deterministic and say this is how it’s going to be, and part of my talk will argue that it doesn’t have to be this way – but ultimately, people are looking to secure a strategic edge with AI, and there are two main domains for this: the economic sphere and the military sphere. And I think in the future we’re going to see – and we’re seeing it already – defence-related startups getting a lot more funding. For right or wrong, that’s where the market’s heading.
Should we be scared of AI?
I’m cautiously optimistic – let’s say.
By that I mean that the opportunities opened up for people – not just for the next generation, but for tens of thousands of years – can genuinely be made far more fruitful as a result of our relationship with machines. If you look at the levels of research in computational biology, you’re seeing many more advances in areas like drug discovery and what’s known as longevity research – and part of that is a result of these big shifts in the arena of AI.
Does that mean that I’m also worried? Yes.
Do I think AI is an existential risk? Yes.
Not necessarily from the military perspective, although that’s part of it – AI is a general-purpose technology at the end of the day. And that means, as the history of science and technology shows, that any time there’s a big disruption in the economy, a lot of value is created, but a lot of people are also disrupted, and some will suffer. I think that we should be hopeful that things are going to turn out rosy, but recognise that there’s going to be a lot of harm done along the way – and it’s the role of government, society, and the private sector to try and make sure that those harms are mitigated as much as possible.
I think that we should be hopeful that things are going to turn out rosy, but recognise that there’s going to be a lot of harm done along the way
Would you say that there’s an Arms Race? A build-up of AI technology that’s being used for increasing military capabilities?
Yes, I would agree.
Part of my talk will consider how this is the overwhelming narrative shaping the development of these technologies. China wants to be the world superpower in multiple spheres. The U.S. still has dollar hegemony and overwhelming military capabilities – and China wants to upend both of those things. The U.S. says ‘not on my watch’, and that fundamentally bakes in a narrative where you rush to secure that strategic edge before your rivals. So there’s an arms race in the military sense, but I think there’s also an arms race in the economic sense, because people are asking: how are we going to deal with stagnating productivity? How are we going to deal with climate change? Politicians like to kill multiple birds with one stone. They want to be able to say ‘innovation and technology will deal with this issue’ – and AI offers that window. Whether it’s true or not is beside the point, really, because it’s part of a narrative. That means you’re going to get people wanting to build their own AI champions. You want to have your DeepMind on campus in the UK rather than in the U.S. You’re going to want to make sure that Huawei isn’t buying up all of your really thriving startups.
So yeah, I would say that there is one.
Do you think that the interdependence of the East and the West – for example how China owns a lot of U.S. bonds, and the U.S. economy is quite reliant on Chinese manufacturing – might make this AI Arms Race ‘cold’?
I think it mitigates it; I don’t think it eliminates it. Take the amount of dollars that China holds, for example – ‘dependence’ is maybe a bit far, but both sides are sensitive to bond markets and dollar markets, and that does matter. That will be a tempering factor. But I don’t think it eliminates the fundamentals, which are that both China and the U.S. want to secure this instrumental strategic edge. And it’s not just the U.S. and China, by the way – it’s countries all over the world, although the big drivers are obviously those two states.
What should international organisations such as the UN do to regulate AI?
I don’t know, is the honest answer! I don’t think anyone knows. I think that AI should be treated as a global public good, by which I mean its benefits should be shared evenly across the population in a way that doesn’t exclude certain groups from accessing it, and doesn’t create rival intentions between different nation-states. The institutions we have in place are not set up to deal with that. If you think about the UN, I don’t really believe they have the capabilities, the knowledge, or the negotiating experience in the technology space to deal with it. The OECD is doing some more interesting work in the AI space, but once again I’m not sure they’re the right home for it either. I actually think we’ll probably need new institutions – similar to those for trade, like the World Trade Organization, but specifically designed to deal with things like AI and emerging platform technologies.
Regarding ideas such as the Non-Proliferation Treaty for nuclear weapons, and their efficacy: there’s a big push to ban, and a lot of people are worried. Let’s say, worst-case scenario, you have some autonomous drone that goes haywire or is hacked and causes a load of bloodshed and damage – people understandably think, ‘let’s just ban that.’ But the problem is that these technologies are dual-use; it’s not like nuclear warheads, which often have just one purpose. Because of the general-purpose nature of these technologies, if you start to ban one area, it’s difficult, because those same systems could be really beneficial for other parts of society. So blanket regulation is quite difficult, I’d say.
AI should be treated as a global public good... its benefits should be shared evenly across the population in a way that doesn’t exclude certain groups from accessing it
Could you explain to us the research that your team at Oxford Insights is conducting into AI?
We do a whole bunch of things – this isn’t actually the main area of focus for me. What we effectively do is advise governments and private sector organisations on how to implement emerging technologies better. Our landmark work is the indices we produce: we made an AI readiness index, which assesses how ready governments are to deploy AI within their public services, and we also have a responsible AI index that complements it, working out whether governments are deploying AI in a way that respects certain ethical principles. We also work with public sector clients; for example, I’m working on a project with DCMS on how to make the most of commercialising UK AI research. The UK has brilliant fundamental AI research, but it doesn’t always translate into the market – so we’re trying to work out how to improve that.
Are there any exciting developments in AI that particularly interest you?
For me, there are two big things. The first is in healthcare. I think one of the big problems of the moment is that we’ve got an ageing society, and the effect that has on public services is very pronounced. Being able to improve the quality of life for these people is absolutely essential – not just for their own wellbeing, but to make sure that we can run public services. A lot of the developments are in areas such as drug discovery, for example AlphaFold, a project from DeepMind which can predict the way proteins fold – and that can mean much better-targeted treatments for serious diseases such as Alzheimer’s and cancer. The other thing is in the area of text-based analysis. Lots of information is inaccessible to lots of people because of language barriers; if you have AI systems that can reproduce information in more niche languages, that increases access to resources for so many different people, and opens up opportunities to people from all around the world – so that’s another really exciting thing.
Transcribed and edited by Alana Gaglio.
The views and opinions presented in this interview belong to Tom Westgarth — not Alana Gaglio, nor TEDxWarwick.
If you have any questions concerning the interview, and opinions expressed, do feel free to comment in the comments section, or email publications@tedxwarwick.com.