The Competition and Markets Authority (CMA) has begun an initial consultation looking into competition and consumer protection considerations in the development and use of artificial intelligence (AI) foundation models.
The development of AI touches on a number of important issues, including safety, security, copyright, privacy and human rights, as well as the ways markets work.
The government has asked regulators, including the CMA, to consider how AI development can be supported in line with five overarching principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress.
The CMA said its initial investigation would focus on examining how the competitive markets for foundation models and their use could evolve. It will also look into the opportunities and risks these scenarios could bring for competition and consumer protection. The outputs of these investigations will be a set of guiding principles, which it said will support competition and protect consumers as AI foundation models develop.
Sarah Cardell, chief executive of the CMA, said: “It’s crucial that the potential benefits of this transformative technology are readily accessible to UK businesses and consumers, while people remain protected from issues like false or misleading information. Our goal is to help this new, rapidly scaling technology develop in ways that ensure open, competitive markets and effective consumer protection.”
Organisations and individuals wishing to submit evidence have until 2 June 2023 to do so. The CMA aims to publish a report to set out its findings in September 2023.
Discussing the CMA’s work on foundation AI models, Verity Egerton-Doyle, counsel and UK co-head of technology sector at law firm Linklaters, said: “The CMA is keen to skill up and understand what role there is for competition law in this important new area. The EU’s Digital Markets Act that came fully into force this week does not cover generative AI, and the CMA no doubt sees this as an opportunity to be leading the global debate on these issues – along with the US FTC, which is already looking at the area.”
BCS, The Chartered Institute for IT, has produced a policy paper, Helping AI grow up without pressing pause, in which it argues for driving AI development forward with guardrails such as sandboxing and independent assessment of AI systems.
While a number of organisations are calling for AI development to be paused, Rashik Parmar, chief executive of BCS, believes the development work should continue. “We can’t be certain every country and company with the power to develop AI would obey a pause, when the rewards for breaking an embargo are so rich,” he said. “So, instead of trying to smother AI, only to see it revived in secret by bad actors, we need to help it grow up in a responsible way.”
Parmar called for the AI industry and policymakers to work together to agree standards of transparency and ethical guardrails, which are designed and deployed by AI professionals.
“We’ve got a generational opportunity to make something that, pretty soon, can solve a huge number of the world’s problems and be a trusted partner in our life and work – let’s take it.”