Business and Trade Committee — Oral Evidence (HC 1794)

14 Apr 2026
Chair

Welcome to the first panel of the Business and Trade Committee investigating AI, business and the future workforce. The frontier AI race is essentially the global competition to build and control the most advanced AI systems, but it is not just about those models. It also spans the compute, data, talent and investment needed to make those systems possible. The two dominant players are probably the United States and China, with the US widely seen as the leader, driven by major tech companies and deep pools of investment. The United Kingdom, while it does not have the same scale of resources, has real strengths in research, AI safety and applied artificial intelligence. The Government have recently announced growth zones and targeted support for AI companies through the British Business Bank. As part of their AI opportunities action plan, the Government have established the sovereign AI unit to support the deployment and development of AI. Could I welcome the panel and ask them to briefly introduce themselves?

Professor Lawrence

My name is Neil Lawrence. I am professor of machine learning at the University of Cambridge and chief scientist of Trent AI.

Dame Wendy Hall

I am Dame Wendy Hall, regius professor of computer science at Southampton. Forgive me, I got back from America last night and I am very jet lagged, but I think I am Wendy Hall. I am a pure mathematician and computer scientist by background, but I got pulled into AI policy. I have been doing AI for a long time—I wrote my first code 40 years ago—but under Theresa May I was asked to do what became known as the Hall-Pesenti review of AI for the UK, which effectively led to the setting up of the national strategy for AI and the AI Council, which we were both on. More recently, I have been on the UN AI Advisory Body, which I can talk to you about. That has been a lot of hard work, but fascinating, and is one of the only games in town for global governance of AI.

Chair

Perhaps you can set the scene for us today. What does the global frontier AI race look like at the moment and how do you see it unfolding? Do you have any thoughts on how you might characterise it? Where do you see the UK in that race, if at all?

Professor Lawrence

It is perhaps symptomatic and problematic of the situation we face that it is being characterised as a race. If I think of a race, I am immediately thinking of Usain Bolt doing extraordinary things in the 100 metres. That implies that there is only one winner and a number of losers. That plays to a narrative that is extremely dangerous for the UK, because it is not going to be plucky Brits participating and snatching a bronze. It is going to be us targeting areas of innovation that are not in the interests of our citizens, that are not solving their problems. They are about issues that are important, but are not in the in-tray of what we need to do for the UK public. The dominance of that narrative is due to a wider problem we face in this space that, if you will forgive a bit of levity, I characterise as “Life of Brian” syndrome. For those who have not seen the film, the opening of that film is characterised by a difficult situation in Judea, where people are extremely nervous about what is going on. As a result, they respond to any pronouncement of an idea as if it is the way forward. They are extremely susceptible to false prophets and confident, strong ideas that seem to give them a way out. Of course, the comedy of the film is all about a regular person being placed as a prophet, who is constantly trying not to be that. That is how I could characterise the situation since the launch of ChatGPT. I do not want to pick on a particular Government or individual. This is systematic and we see it also in businesses, where people are gravitating around simplistic ideas, which we know in the long term are unlikely to provide the solutions for our businesses and for our small and medium enterprises, because it gives them a sense of tangibility and security in what is a very uncertain space. 
Of course, the irony is that, in such uncertain times, we need to be listening to the more nuanced perspectives and the people who are saying, “It might pan out like this, or it might pan out like that.” Most importantly and most underserved, we need to be listening to our citizens and businesses—not the big tech leads and their ideas of what the future looks like, but the problems that our citizens face that have been problems certainly ever since the first season of “Yes Minister”, because they all exist in that other comedy, but remain problems today. There are problems in health, social care, security and education that we would love our small and medium enterprises to be part of solving. I am afraid that, despite the extraordinary power of this technology, we are not seeing the signs of that yet. I largely say that it is because, instead of thinking about that and listening to our citizens, we are distracted by notions of global AI races. I am not saying that there is nothing geopolitical going on, but they are a construction associated with aspects of the world that are difficult and problematic, and need resolving, but they are not going to solve the problems of our businesses.

Chair

That is really helpful. Dame Wendy, what are your thoughts?

Dame Wendy Hall

I echo everything that Neil is saying there. I will add some other comments. I am very aware of the geopolitics. I have written about the geopolitics of the internet and am working on the geopolitics of AI, and I do a lot in China. It is the press; as Neil said, we do not want to see it as a race, but that is how people like to portray it. We are very torn in that we cannot deal with both sides of that equally. We are very constrained, in universities as well as businesses, to work with the US. China is doing some amazing work in AI and in fact, at the moment, is acting as the good guy because the US is totally against any regulation. We talk about global governance. It is all MAGA—“We’re going to win at all costs.” There is the hype that comes out of the companies, because it has, since ChatGPT, been set up very aggressively as not just a race between countries, but a race between companies. That puts us in a scenario that is so dangerous, because they have to make money and they are not. There is no profit being made by these big companies. The ones that are going to win are the ones with the deep pockets that can sustain themselves. OpenAI is in deep trouble. Anthropic is going to have to do huge amounts to keep at the front of everything, because it does not have the big pockets of money that Google, Microsoft and the likes of AWS have. The hype and marketing that comes out as a result is so misleading. I was on holiday last week, but I still keep up with the news. I was on a cruise ship in the middle of the Atlantic. There was such a lot of news about AI last week, and stuff came out of Anthropic. We have heard this before; they cry wolf all the time. Anthropic is saying that its new platform, Mythos, as in mythology—funny that—is too dangerous to release. We heard that from OpenAI in February 2019—that its GPT-2 model was too dangerous to release. 
There is some element of, “We are being careful and checking things,” but there is also a huge element of hype: “Be ready, because you’re going to have to buy this product in order to stay in the game.” I could go on about that forever. What is our role in all this? When we wrote the Hall-Pesenti report, it was very clear what our role was, and it still is the same role, actually. In your introduction, I do not think you mentioned education and research where we are incredibly strong. We have an amazing legacy. Neil is at Cambridge. There is Cambridge, Oxford, my university Southampton, Edinburgh and Manchester. As I said, I wrote my first AI code 40 years ago. We were teaching AI. We have been teaching it and researching in it for that long. In the UK we have some of the top research groups in the world. You will hear the companies say, “Everything happens in industry.” Actually, everything they are working on started in universities and started here. Geoff Hinton, who got the Nobel prize for physics, started in Edinburgh. Demis is still here, bless him, at Google. I am so proud of what they do. He had to sell to Google, but he still stays in the UK, and we laud him as a son of the UK. We have this amazing legacy. We need to continue to train the next generation. I will say a bit more about China and then I will shut up. I always say to people that AI professors do not grow on trees. It takes a long time to grow an AI professor and we are very good at it in the UK. You need those AI professors to train the next generation of AI engineers and researchers. They are the people who the Saudis and whoever else is trying to grow an industry will be after. We must protect that legacy and the legacy of research. Then it is the applications, and we are very good at this. I will say one more thing about China. In the UK, it is getting increasingly difficult for us to collaborate with China on research. I feel like my academic freedom is being limited. 
Yes, we need to protect our secrets. We need to make sure that what we are producing is not being used in a way we do not want it to be used. It is harder and harder to collaborate with the US because it has reduced funding, and it is even harder to collaborate with China. We are going to have to manage. I will say one other thing. We need to talk today about sovereign AI.

Chair

Thank you. That is very helpful.

John Cooper (Conservative and Unionist Party, Dumfries and Galloway)

You mentioned there that you thought the Chinese were actually acting as the good guys in this. I wondered whether you might explain what you meant by that.

Dame Wendy Hall

Thank you for asking. The US is saying, “No regulation. It’s all here. We’re going to win and you have to buy our stuff.” We have to worry about what we do with our secure data, because we have to be careful about hosting things on US servers. Clearly, we have to do that with China, but China is totally backing what the UN wants to do. I can talk more about that if you want me to. The Chinese are also making an awful lot of their stuff open source. That does not mean they are not using it for military and other purposes. Of course they are, but a lot of their models are open source. They are going open source and going, “We want to be part of the discussions about governance of AI globally.”

Chair

We do not think we are in a race, but it sounds like we have some excellent strengths to build on. I am going to hand over to Justin, who wants to pick up on some of these points around what the UK’s strengths might be.


It is clear that the race analogy is not a very helpful one. We will try not to get into crude analogies, but get some facts about where you think our strengths are. What is it that we can build on in this country? How can Government help deliver on that?

Professor Lawrence

If we want an analogy—apologies—I prefer a football analogy. We are being presented with what we might see as a football tactic of balls being swung in from the right and left, so there are only two ways of doing things. We need to play creatively through the middle and that is to play with our skill base. It is harder to do, actually. It requires what we lead in and what Europe leads in. A major failing of UK policy is that it does not look more towards our European partners and what they are doing in response to these challenges. Across Europe, you see a diversity of liberal democracies that have different circumstances and are responding in different ways according to their economic set-ups. If you take Germany, with the cyber valley and its large investment also in the Munich ecosystem, it is trying to focus on how it takes its brownfield industry and sustains it through to an economic future where it is still competitive, through Government intervention. If you look at Finland, where a lot of the innovation is greenfield, you see a very different strategy, which is focused on its lead in tech and the fact that, after Nokia collapsed a bit, it has enormous capabilities at the tech forefront and is trying to see more start-ups. The UK sits somewhere in the middle of this, but there is a commonality across all of Europe. We lead on human capital. If you look at the United States, it imports extraordinary amounts of human capital from China, Europe and India. We have approximately the same inflow and outflow of human capital. Countries such as Italy are in a structural deficit in human capital, so they are educating a lot of people who are leaving. There is a really major question. We cannot underestimate that, despite what Wendy rightly said about the hype we are hearing from individual companies, in the long term, this is one of the most transformative technologies we are ever likely to see. 
If we want the steerage of this technology to go in a way that capitalises on our human capital and datasets, we need to be steered by driving the technology in directions that support the evolution of that human capital. Capability in human capital is simultaneously a strength and a weakness. The human capital we have constructed through education is clearly not going to be exactly the right form of education that we need in 20 years’ time. That is going to change. That means supporting workers in the workforce today to assimilate and adopt these technologies. I am talking about small and medium enterprises, and specifically about the types of companies that are in the lower-productivity sector, so those that have already been left behind by previous digital technologies and felt unable to invest. I am talking about companies that are struggling with their management. We see statistics that show that companies that are well managed will adopt the technology more quickly, so improving the education of our managers in how to build and deploy this technology is important. The big danger with it is that it is always all on or all off. The typical response—and I do a lot of work with businesses—is for people to be all in or all out. Just like any nuanced technology, you need a calibrated understanding. We have a massive advantage in all those areas. We have what I refer to, and Wendy has referred to it as well, as campus UK. In terms of the concentration of educational and research expertise that you can get to by train within the United Kingdom, we are second to almost no other place in the world. The United States is very distributed in this way. We do not put enough investment in places such as Southampton or Glasgow. Because we are distracted by AI races, we are not encouraging our universities to engage better in their local communities and in the types of businesses that we need to be assimilating these technologies. That is not what they are rewarded for. 
They are rewarded for Nobel prizes and headlines on the cover of Nature. I would like to see a position where we are rewarding our academics just as much for being on the cover of the local paper, for the right reasons, as for being on the cover of Nature.


Do you have anything to add to that, Dame Wendy? Where are the UK’s comparative strengths?

Dame Wendy Hall

I have just written it down, because I reminded myself what was in the Hall-Pesenti report—I am not just saying it because I was one of the co-authors, I do not think; we were talking about this earlier. You could look at those things. We talked about data, skills and adoption, which is everything Neil said about helping small to medium enterprises adopt AI, and we talked about leadership. We can do all four of those things really well. The other thing that has been happening very recently, which I like, is the AI assurance industry. This is going to get very big. One of our strengths came out of the Bletchley Park summit that Rishi Sunak started. He did not get everything right, by the way, but that came out of it. What is now called the AI Security Institute came out of it. I would say that we are the best AI safety and security institute in the world. Again, this is something that the US does not want, security institutes, but the security institute has supported the setting up of what is called a network of AI measurement, evaluation and assurance. We could lead in the AI assurance industry. This is testing and evaluating products before they are released. At the moment, it is a wild west. The companies can release anything to the general public. It is like we are running an experiment that we cannot undo.


What are the risks of that?

Dame Wendy Hall

Things could be dangerous to personal, company or national safety, and there is no testing of it. The National Physical Laboratory—NPL—has also been funded through, I think, AISI, or through DSIT anyway, to set up a centre for AI measurement. We could become one of the real leaders in that in the UK. Remember, AI is not just about technology. It is about people using that technology, so there is a sociotechnical aspect to it as well.

Alison Griffiths (Conservative and Unionist Party, Bognor Regis and Littlehampton)

Dame Wendy, I wanted to come in on what you were just saying. What is good or bad about AI is quite a subjective measure in many ways, but you mentioned metrics. How do you think we should look at defining what is allowed out into the world and what is not?

Dame Wendy Hall

There is no quick answer to that, because we do not know yet. This is an evolving science. I am beginning to talk publicly about a science of AI—about learning how to do this. I am one of the people who think that we have time. If you listen to some of the stuff that comes out of Silicon Valley, it is all going to happen this year and we are all going to hell in a handcart, whichever way we think about it. They seem to want it both ways. They want us to buy their products and to understand that there is an existential threat down the line. I am one of the people who believe we still have time, because it is not moving as quickly as they tell us, even on jobs, which I am sure we will come to. We have time to develop the metrics. We have already started with the two investments I mentioned. We have AISI, the work at the NPL and others looking at AI assurance. We are good at this stuff. We could really build on that, I think.

Mr Reynolds

Justin was talking earlier and asked a question about the UK’s AI strengths. I want to look at the other side of the coin for a moment. A 2025 Department for Science, Innovation and Technology sector study found that one in three UK AI leaders were actively considering relocating their bases overseas, mostly in their entirety to the US. Where do you think the most significant gaps or weaknesses are that limit the UK’s competitiveness?

Dame Wendy Hall

We are two who are not. You said one third. We are staying here.

Professor Lawrence

I am a dual national citizen, so I could have spent my entire career in the US. You have to be very careful about what they are defining as an AI leader. At the moment, we have what I would characterise as a growth strategy around AI. I am all in favour of that, but that is being dominated by what I would say is supply-side stimulation. One unfortunate thing about the UK, to a greater extent than anywhere else, apart from perhaps the United States, is that you see that the advice in DSIT is dominated by people with supply-side interests. We are talking about people who can provide infrastructure, so things such as the US-UK surplus deal. I believe that you have had a recent visit to India. It is an extraordinary example. I was on Doordarshan television for an hour and a half on the evening of the AI summit where Dame Wendy spoke beautifully. They asked me what the west can teach India about AI. I said that it is the other way around and that we can learn from India. Whatever the challenges of the current Government and Modi’s policies in certain areas, you can look at what he is doing in terms of these technologies. It was the first time at that summit that we sat in the audience and you saw the discomfort in the AI leaders as they were forced to talk about Indian people, Indian farmers and what was going on in India. As I think he refers to it, AI that works for India works for the world. What does he mean by that? He means that India has the cross-section of citizen problems that reflects the cross-sections of problems across the world. There is some propaganda and marketing behind that, but it is an utterly different situation from what we are facing in the UK, where we are constantly hearing about how AI that works for the UK is AI that works for Microsoft, Amazon, OpenAI, Google and these other big tech companies. Our announcements are a series of announcements of deals with these companies aiming to deploy in the apparent interests of our citizens. 
Yet, when you look at the track record of what it means to centrally deploy, what do you get to? It is another problem that this Committee has been very interested in, the Horizon scandal, where you centrally deploy a technology on people without engaging them. In terms of our weaknesses, it is a weakness to be looking outside and constantly across the Atlantic at organisations that are brilliant—we absolutely want them here—and overemphasising their contribution and what they can do for the economy. It is not going to come from them. It is going to come from our businesses adopting, assimilating and innovating. At the moment, that is an enormous weakness—a lack of confidence in our own people, businesses and universities.

Mr Reynolds

On that, confidence is one piece, but there is also a funding issue in terms of the large-scale funding that is required. We also have talent pool issues. We do not necessarily have the people with the right skills in the right places.

Professor Lawrence

So much of this is addressable without massive funding.

Dame Wendy Hall

Exactly, yes.

Professor Lawrence

It is addressable with a nuanced understanding of our ecosystem. We need to pause and understand what to do to encourage peer-to-peer support networks. We are in a situation where, for the first time ever, we are working with local authorities in Cambridge. We are working with our local councils. As soon as we met them, they said, “Our people have formed AI clubs. They want to know how to deploy this technology.” It is not everyone, but there are people in the local authorities who want to do that. We work with South Cambridgeshire to build on that interest and support it in building planning solutions that are deployed by the planning officers themselves, in collaboration with the University of Cambridge and the University of Liverpool. What do we hear four months later? There were big announcements that Google is going to solve planning across the country. We are doing that for, what, £20,000? It is just about reaching out into these challenges and listening to their problems. Liverpool, I hope, will build a business on the back of that, because it has learned an enormous amount about how you can process planning allocations by working at the forefront with South Cambridgeshire. I hope that Liverpool will get a business out of it. I got an email on the day of that announcement saying, “What does this Google thing mean for us?”

Alison Griffiths (Conservative and Unionist Party, Bognor Regis and Littlehampton)

This is a very short question.

Professor Lawrence

Sorry, my answers are too long.

Alison Griffiths (Conservative and Unionist Party, Bognor Regis and Littlehampton)

It is to invite Professor Lawrence to come to my constituency and meet my local council. I would love to put your ideas in front of it.

Professor Lawrence

We have open schemes, which we have just closed, unfortunately, for inviting councils in and working with us in this way. I will definitely give you the details on that.


I have far too many questions and I know we have so little time. We have talked a bit about the uncertainty around US tech companies. There is a political imperative around understanding what that actually means. I would be really keen to get your perspectives on this. We have seen owners of AI platforms show a willingness to use their stack to influence foreign Governments. We have definitely seen that. It is a bit of a blunt question, but have we effectively outsourced our AI model development to private billionaires with zero loyalty to the British state and consumer?

Dame Wendy Hall

Yes.

Professor Lawrence

We have to be a bit careful. These businesses are not evil, but they are incentivised in different ways. When we talk about AI alignment, are we aligning the models with the interests of our citizens and values? You could also talk about corporate alignment. Corporations themselves are entities and these corporations are clearly not aligned with the interests of our citizens, which is why there is such a gap between what they ask for and what is delivered.

Dame Wendy Hall

Can I say a bit more? We need to think more about what we mean by sovereign AI. I said yes, because that is where it is and that is where these big companies would like it to be. That leaves us with all sorts of problems, so the things Neil has been talking about, but also in terms of training and skills. It is very hard. We need to help people; we need—what do you call them, SLMs?

Professor Lawrence

Yes, small language models, which you saw in India a lot.

Dame Wendy Hall

Oh, the enthusiasm in India from the young people—there were 250,000 people at the Indian summit. I would say that half of those were young people, who were so enthusiastic about how they could help people in India with AI. We have no way of enabling people to really play. You can do it at Cambridge, because you build it yourself. There are these small language models, which India and others are doing, to enable small organisations, such as local councils, to use the technology themselves without having to buy into a big platform, and for training students.

Professor Lawrence

Can I riff on what you are saying a bit, Wendy? The other thing that is happening at the moment is about orchestration, and that is a total game changer. Many of the assumptions that people were making about where the innovation was going to occur were based on the fact that you had to train these models, but if you look at all the recent headlines, OpenClaw or the recent innovations from Claude Code, all these technologies are so-called agent orchestration. An 11-year-old can do this. Genuinely, everyone can participate. It is a very easy thing to do. We are creating such models within Cambridge. We are using them for training and supporting our scientists. We would like to get that sort of work out into councils. It is very possible because, as Dame Wendy was saying earlier, open source is so important in innovation. The orchestration systems are much more open source and accessible. Of course, they come with risks as well. They are very powerful. I would argue that what we are going to see over the next decade is going to come more from orchestration than from the individual models themselves.

Dame Wendy Hall

That is why we need an assurance industry. Because everybody can do it so easily, there are huge dangers there. People need to know that what they are dealing with is safe. That is why we need an assurance industry.

Alison Griffiths (Conservative and Unionist Party, Bognor Regis and Littlehampton)

I have not had the chance to ask about the growth prize and what that looks like for the UK, but we have talked about it to some degree. I also wanted to ask Dame Wendy, to your point about China, to talk briefly about the balance of risk as well. On the growth prize, we have talked in quite broad terms from many different angles here, but that is one thing that I want to look at.

Dame Wendy Hall

Do you mean that there is a risk in us not using the Chinese models? I think that there is.

Alison Griffiths (Conservative and Unionist Party, Bognor Regis and Littlehampton)

Yes, exactly. You talked about the fact that you felt unable to pursue your relationship with China as much as you wanted to. I suppose the question that I have around UK growth is the balance of risk and how we manage it.

Dame Wendy Hall

I would love to be able to let my students in Southampton use a Chinese open model, because the Chinese are very innovative. They are doing things very efficiently and innovating in terms of the way they build these things. We cannot choose to use them. We would not be allowed to. It would not get through the trusted research network. I sort of understand the reasons why—I do understand the reasons why—but it is very limiting. I am trying to think of an analogy where you would say to the world, “You can only buy drugs that are produced by the American pharmaceutical industry. You cannot buy drugs that are produced by the Chinese pharmaceutical industry, even if they are going to cure cancer.” That is the sort of analogy here.

John Cooper (Conservative and Unionist Party, Dumfries and Galloway)

Professor, you talked about campus UK and our attractiveness for AI companies. We are host to something like 6,000 companies that are involved in AI. I wonder how attractive we are on the global scale. What factors influence that attractiveness? Particularly, is the Government’s AI action plan having a positive impact, or could that be improved?

Professor Lawrence

My feeling is that it was welcome, but it was a minimal plan. If you looked at the amount of compute that was being talked about, it was only about three times the amount of compute you would expect to get just with processor improvements over the years. I worry that, again, it is the supply side. The notion of an AI company is quite misleading in some sense. Wendy is talking about assurance. She is absolutely right, but a lot of the lessons of the innovation are going to be not worked out in an AI company that is separated from the adoption layer. They are going to be worked out by people who are coming from a domain, say a nurse who builds an AI company that supports nurses in data entry. That is very localised work. The challenge we face with digital technologies historically is that they are difficult to create and then imposed on us in a one-size-fits-all manner, which gives an enormous amount of power to the tech companies that control the software engineers. That model is dead. Now you can create software trivially. Any of us in this room can talk to a machine and have it do what we want. That is a revolution unlike any we have ever seen. Our notions about where the innovation will occur—previously it has been centralised in a company that pays a large amount of money to a group of software engineers—are going to absolutely shift over the next 10 years. We are going to see the potential that these innovations are deployed much closer to the coalface. We are competitive for AI companies. They come and locate here. We have a bunch of people who we graduate. With all due respect, I feel that it is the wrong question. It is because of that type of distraction that we miss what is actually going on. I believe that it is just as we saw with the internet. We all thought that the internet was one thing. A 17-year-old Mark Zuckerberg decides it is something else. Who is right, unfortunately?

John Cooper (Conservative and Unionist Party, Dumfries and Galloway)

I wonder whether that is the point, Dame Wendy. Are we asking the wrong question here? I often think of television. You do not need to understand Rediffusion to change the channel on the TV. Are we looking at this through the wrong model? Are we trying to centralise control when we should be diffusing control?

Dame Wendy Hall

Yes. It has to come from the grassroots. You talked about what you are doing in Cambridgeshire, Neil. We are doing the same sort of thing in Southampton and Hampshire, giving people the power to do it themselves. That is all about education, training and set-up. It is becoming quite clear that every sector, organisation and company needs AI champions: people who are trained, who understand what is going on and who can help the organisation, company or city council to adopt. This does not take mega amounts of central funding, and that would be the wrong way to do it. I think that we are totally agreed on this. We need to empower the grassroots.

Professor Lawrence

On top of that, it is often the less well-known universities that have been doing this and not been rewarded for it. I was doing the REF impact exercise, where we review what universities have done. Universities such as Bournemouth or Lincoln are engaged with their local communities—Lincoln with farmers and farming robotics, and Bournemouth with local hospitals—but the way we reward universities does not recognise that work. It is not just about Cambridge and Southampton. It is about every higher education institution, many of which were founded in the first place to deal with these types of challenges as they arose in the industrial revolution, yet we have lost track of that.


I am really encouraged by the idea that we do not have to talk about investment in these astronomic terms to shift the dial. The £500 million investment in the sovereign AI fund could have a significant impact. I am hopeful of that.

Dame Wendy Hall

It could, yes.


What would we need to do? What would that fund need to do for it to be impactful?

Dame Wendy Hall

You would be funding people to build their own models. A process is beginning to emerge: clean your data up. We always talk about AI and do not talk enough about data. Neil and I are both steeped in the data world. You have to have good, clean data. I would challenge Neil a little bit—you said something like, "We can build software so easily now." We can, but you have to check it every step of the way. These things are not, and never will be, 100% accurate.


Do we need professional standards then?

Dame Wendy Hall

What do you mean?


I mean in terms of having professional standards in the development of AI.

Dame Wendy Hall

Yes, absolutely.

Professor Lawrence

That has been a problem in software already.

Dame Wendy Hall

You cannot assume that the AI that is being built at the moment is going to do what you tell it to do or give you the answers you need. It is not like a calculator.

It is about putting in the infrastructure locally. I think that there will be quite a revolution in terms of data centres coming up. There will be smaller data centres that will use less electricity, or they will develop the engineering required to turn a data centre into a heat-supplying function. We will all want data centres in our towns and villages because they will supply the heat for the local area. I can see a national energy grid made up of data centres in the future. India is doing work on this already. We could work with India.

There is so much that people, organisations and companies can do to help themselves, with encouragement. That was where the adoption was going in the Hall-Pesenti plan. It was about helping people to help themselves. Someone talked about nurses, and those are the sorts of things: every hospital needs its AI champions in every sector so that you can start building it up. Central imposition is not going to work.

Chair

We are going to come to Alison Griffiths to round up on the final question.

Alison Griffiths (Conservative and Unionist Party, Bognor Regis and Littlehampton)

What single Government action would most strengthen the UK's competitiveness in frontier AI? And if the Government do nothing, what position in the global AI race does the UK risk ending up in that we do not want to be in?

Dame Wendy Hall

If the Government do nothing, we will slip off the radar. We are living on a legacy that we need to build up and maintain. I really do not think it is about—and I say it again—central funding for things. It could be funding that is distributed to the people who are going to develop the examples and models that other people can then use, just like we built the web. Do you remember a time when nobody knew how to build a website? You learned by following what other people had done. You used to have to pay a fortune to build a website and now anyone can do it.

It is that sort of innovation and support for innovation that is absolutely crucial, as well as getting some good advice into Government. I do not know where our Government get their AI advice from. There are a huge number of people who work in DSIT. They are all very good, well-intentioned people, but where are they getting the advice from about what to do with the money? There is no AI Council any more, so there is no independent advice, as far as I can see, going in.

Professor Lawrence

It is a great question, and it is difficult to pick a single point. To come back to something I touched on at the beginning, I view this as a supply and demand question with an innovation pipeline. We have an extraordinary technology and a lot of what has been done is great, but you can see that all the advice is on the supply side. A lot of that comes through lobbying that Government should invest, or should have partners that are announceable.

The single intervention is on the demand side. We do not actually have to think a lot about the supply side. All these companies have enormous lobbying operations now, which have grown tremendously over the last five years. It would be very interesting to see those figures, because they have seen our potential as a market. To a large extent, it is more about pushing them back a little and leaving space for UK small and medium enterprises.

A very good example is supporting the CMA in what is a pro-innovation agenda and realising that that is not pro-big tech; it is pro-innovation. The idea that big tech has put about—that because the CMA is pro-innovation it should be pro-big tech—is disastrous. We need to look at that again and say that we have an amazing Act, the Digital Markets, Competition and Consumers Act, which gives the CMA tremendous powers to create a space for small and medium enterprises creating solutions to the challenges that UK citizens face. We need to enact those powers, without enormous amounts of new legislation or regulation, and enable UK businesses to flourish.


I fully concur on those CMA comments. On the AI Safety Institute, now the AI Security Institute, and to your point about UN global governance on AI, what do you think our Government should be doing more of in terms of interacting with those? There are some big risks out there, and I am not sure that the new Government are serious enough about that. What pointers could you give us, as a bunch of politicians, on that?

Dame Wendy Hall

The UK should absolutely stand up and back the global governance initiatives that the UN is taking. I believe that we are, in terms of the people who work in that area and the DSIT international people, but it needs Government support. The geopolitical problem is that the US is saying, "We don't want anything to do with it," which is the tension we have in everything at the moment. I believe that there is a huge gap globally in the governance of this area. We need standards that people will agree to, in terms of testing and assurance.

It is simple things such as, "Don't we have the right to be told whether something we are looking at was produced by AI?" Do we have to work it out for ourselves? As this technology gets smarter and smarter, is it up to us as individuals? It is scary for people not to know. There is no way of being 100% exact, but we need some assurance—sorry, I keep using that word—about what we are looking at: "Is this video real? Do I believe it?" Of course there are hybrids, but we have a right to know whether something has been produced by AI, and whether the voice we are talking to is a human voice or not.

Chair

I want to thank both of you for your extremely enlightening evidence and the help that you have provided to this Committee, as we go through our AI inquiry, in identifying some of those strengths and weaknesses. I am going to have to conclude the panel. I am afraid that we are already over time, but I want to thank you very much.
