
Amy Wilson Wyles
April 2026
On 14 April 2026, the Business and Trade Select Committee opened oral evidence for its inquiry into Artificial Intelligence, Business and the Future of the Workforce with a panel focused on frontier AI and policy. Giving evidence were Professor Neil Lawrence, DeepMind Professor of Machine Learning at the University of Cambridge, and Dame Wendy Hall, Director of the Web Science Institute at the University of Southampton.
Lawrence opened with a challenge to the framing of the inquiry itself. Describing AI as a race, he argued, is not merely unhelpful; it is actively distorting the choices governments make.

To illustrate the point he reached for an unlikely reference: Monty Python's Life of Brian. In the film, people in desperate and uncertain circumstances become dangerously susceptible to false prophets, gravitating towards anyone who offers a simple, confident answer. Lawrence's argument was that governments facing the AI moment are doing exactly the same thing: latching onto tangible, high-profile interventions - big infrastructure announcements, headline partnerships with dominant technology firms, sovereignty units and action plans - because they provide a sense of security in what is genuinely uncertain territory.
The problem, he suggested, is that those interventions are largely shaped by people with supply-side interests. The advisers in the room tend to be the ones providing infrastructure, building models or selling platforms. The voices that get heard are those of the largest technology companies, whose version of what AI should do for the UK reflects their own commercial interests rather than the real problems facing British citizens, businesses and public services. The result is a strategy that keeps looking across the Atlantic for solutions rather than investing in the ecosystems already here.
Dame Wendy Hall was equally direct. The race narrative, she argued, is partly a construction driven by hype, competitive marketing and the need of large AI companies to justify their valuations. Some of the most alarming capability claims she had heard recently echoed announcements she remembered from 2019, when the same organisations declared their models too dangerous to release. There is genuine science here, and genuine cause for attention. There is also, she suggested, an enormous amount of noise.
Both witnesses were clear that stepping back from the race framing does not mean disengaging from AI. It means being more precise about where Britain has a credible advantage and backing it seriously.
That advantage starts, unmistakably, with education and research. The UK has been teaching and researching AI for four decades. It has some of the world's strongest university departments, a concentration of expertise that far exceeds what most comparable countries can offer, and a track record of producing foundational ideas that the largest technology companies are still building on. As Hall put it, AI capability does not grow on trees. Training a world-class AI professor takes years. Training the researchers and engineers those professors produce takes years more. That pipeline is not something that can be bought quickly - it has to be grown, and the UK has been growing it for longer than almost anywhere else.
Lawrence pointed to something he called Campus UK: the extraordinary density of educational and research expertise that can be reached by train within the United Kingdom, from Southampton to Edinburgh, Cambridge to Manchester. In terms of concentration of knowledge within accessible geography, he argued, the UK is second only to the United States. That is a profound structural asset and it is currently being underused.

Lawrence was direct: almost all of the advice flowing into government on AI comes from people with supply-side interests. The announcements that dominate the public conversation - data centre investment, compute targets, partnerships with Microsoft, Google, Amazon and OpenAI - are exactly what those advisers would recommend, because they benefit directly from them. The demand side barely gets a look in.
The demand side is where the real economic prize lies. It is about whether mainstream businesses, public services, local councils and institutions can actually adopt AI in ways that improve their productivity and solve real problems. It is about whether a nurse can help design a tool that makes nursing better. Whether a planning officer can build an application that improves local planning. Whether a smaller business with no dedicated technology team can find a practical route into AI adoption without needing to become a technology company first.
Lawrence’s team at Cambridge had been working with a local council, at a cost of around £20,000, to build planning tools developed and deployed by the planning officers themselves. Months later, the government announced that Google would solve planning nationally. The contrast, he suggested, captures something fundamental about the current direction of travel: a preference for large, centralised, prestigious interventions over the quieter, distributed, locally-rooted work that is more likely to create durable value.
The UK's higher education institutions are typically evaluated on research prestige, international rankings and headline academic output. Lawrence argued that this rewards the wrong things. Universities should be valued just as much (perhaps more) for their engagement with local businesses, councils and communities. For helping regional employers adopt new technologies. For building the kind of practical capability that spreads across an economy rather than concentrating in a handful of elite institutions.
He pointed to examples that rarely make the national conversation: the University of Lincoln working with local farmers on agricultural robotics; Bournemouth working with local hospitals on healthcare applications. These are institutions doing exactly what universities were originally founded to do - responding to the practical challenges of their time and place. They are not being rewarded for it. A more serious national AI strategy would change that, explicitly valuing the work of universities that engage deeply with their local economies alongside the work of those that publish in Nature.
Hall reinforced the point with a practical prescription: every sector, every significant organisation and every local authority needs AI champions who are trained to understand the technology and equipped to help their organisation adopt it from the inside. This does not require enormous centralised funding. It requires grassroots investment in capability, confidence and knowledge-sharing. It is, she suggested, exactly the kind of work that universities at every level are well placed to support.
For several years, the assumption underlying much AI investment has been that innovation requires training large models, an activity that demands enormous capital, specialist infrastructure and scale that only a handful of organisations can provide. That assumption, Lawrence argued, is becoming less reliable.
The rapid development of orchestration technologies - systems that combine, direct and apply existing AI models rather than training new ones - is fundamentally changing who can participate. Open-source orchestration tools are increasingly accessible. The ability to build useful, domain-specific AI applications no longer requires access to the largest models or the deepest pockets. A university team, a public body, a small business with relevant domain expertise can now build things that would have been impossible two years ago.
If the next wave of AI value comes more from orchestration and application than from training ever-larger foundation models, the field opens up significantly. That is good news for the UK, whose strengths lie precisely in the areas that benefit most: domain knowledge, institutional depth, research capability and the practical understanding of how AI needs to work in specific, regulated, human environments.

Both witnesses pointed to AI assurance (the testing, evaluation and measurement of AI systems before and after deployment) as an area where the UK has a genuine and underexploited competitive advantage.
At the moment, Hall noted, AI products are being released to the general public without meaningful independent evaluation. The experiment is running, and it cannot easily be undone. Systems that could affect personal safety, institutional integrity or national security are entering use without the kind of rigorous testing that other high-stakes industries take for granted.
The UK is unusually well placed to lead on this. It has the National Physical Laboratory, respected independent institutions, a strong tradition of standards and measurement, and the foundation laid by the AI Safety Institute established after the Bletchley Summit. Lawrence and Hall both pointed to this as an area where relatively modest investment could build genuine global authority - not by racing to build the most powerful models, but by becoming the world's most trusted place to evaluate them.
For businesses, the commercial implications are significant. As AI tools become embedded in regulated sectors like financial services, healthcare, legal, and public services, the ability to demonstrate that those tools have been independently tested and verified will become a competitive differentiator. The organisations and jurisdictions that build credible assurance infrastructure now will be well positioned when that demand arrives at scale.
The UK consistently underestimates its own institutions, its own businesses and its own people. It assumes that solutions must arrive from outside, usually from the largest US technology platforms, rather than being developed through the organisations, universities and enterprises already operating here. That assumption shapes policy, and policy shaped by it tends to reinforce dependence rather than build capability.
When government strategy is built primarily around announcing partnerships with Microsoft, Amazon and Google, it signals to domestic businesses and researchers that the real action is happening elsewhere. It crowds out the space that smaller enterprises, universities and public institutions need to experiment, build and demonstrate what is possible. And it means that the problems citizens actually face in health, in education, in local services get less attention than the agendas of companies whose interests are not always aligned with those of UK citizens.
The remedy, both witnesses suggested, is not hostility to large technology companies. It is proportion. Those companies have a role. They are not, however, going to solve the UK's productivity challenge, close its adoption gap or build the distributed, locally-rooted AI capability the country needs. That work has to happen here, built by the people who understand the problems.
Hall made one further point that landed with particular force. The increasing difficulty of collaborating with Chinese researchers and using Chinese AI tools, driven by geopolitical pressure and network security requirements, is, she argued, a real constraint on the UK's options. The analogy she reached for was blunt: telling the world it can only use pharmaceuticals made by American companies, even if Chinese ones would be more effective. That is not a comfortable comparison, but it captures something important about the cost of the current approach: some of the limitations on UK AI development are not being imposed from outside but chosen, and the trade-offs involved deserve a more honest public conversation than they are currently getting.
For Boardwave members, the arguments in this session land in several practical places.

The orchestration point is the most immediately actionable. If you have been watching the AI landscape and concluding that meaningful participation requires resources or infrastructure you do not have, that calculation is changing. The ability to build domain-specific AI applications using orchestration tools is becoming genuinely accessible. The companies that move first, combining their sector expertise with the new tooling, will build advantages that are harder to replicate than anything purchased from a large platform provider.
The assurance argument is worth taking seriously in any regulated sector. The question of how to demonstrate that an AI system is safe, reliable and compliant is not yet well answered but it will be, and the organisations that get ahead of it will find it easier to deploy AI at scale, move faster through procurement processes and build the kind of institutional trust that makes adoption stick.
And the broader confidence argument resonates directly with what Boardwave hears from its network. The peer knowledge, shared experience and practical intelligence that flow through a community of 2,000 European tech CEOs are part of how that confidence gets built - by learning from the founders and operators who have built significant businesses from exactly the same starting point. That is what spinning the flywheel faster looks like in practice.
The UK has the ingredients Lawrence and Hall described. Strong universities, deep domain expertise, respected institutions, a research legacy that the world's largest technology companies are still building on. The question is whether the policy environment, and the businesses operating within it, can learn to back those strengths with the same conviction that other countries bring to backing theirs.
This article draws on oral evidence given to the Business and Trade Select Committee on 14 April 2026 as part of its inquiry into Artificial Intelligence, Business and the Future of the Workforce.
















































































