MPs warned of AI arms race to the bottom


Big tech firms are willing to throw out artificial intelligence (AI) safeguards to keep up with competitors, risking an “arms race” to the bottom, MPs have been warned.

The House of Commons Science and Technology Committee launched an inquiry into the UK government's proposed "pro-innovation framework for regulating AI" in October 2022, to scrutinise its approach and to ensure the technology is used in an ethical and responsible way.

Addressing the committee in the first session of its inquiry in late January 2023, Michael Osborne, a professor of machine learning at Oxford University and co-founder of responsible AI platform Mind Foundry, said that the emergence of an AI arms race is a “worrying development”, because it signals the start of a race to the bottom in terms of safeguards and standards.

While arms races usually refer to military competition between nation-states – and such a race is already under way with AI – Osborne said the civilian applications of the technology could bestow huge advantages on whoever is able to develop "a really sophisticated AI" first.

Noting that Google founders Larry Page and Sergey Brin had very recently been called back into the company (after stepping back from their daily roles in 2019) to consult on its AI future, Osborne added that OpenAI's release of ChatGPT in November 2022 has placed a potentially dangerous "competitive pressure" on big tech firms developing similar technology.

"Google has said publicly that it's willing to 'recalibrate' the level of risk it assumes in any release of AI tools due to the competitive pressure from OpenAI," he said.

"The big tech firms are seeing AI as something very, very valuable, and they're willing to throw away some of the safeguards…and take a much more 'move fast and break things' perspective, which brings with it enormous risks."

He added, however, that the AI arms race is not limited to tech firms jostling for dominance in the private sector, and there is already a geopolitically motivated AI arms race underway between the US and China, as “the actor that masters a particular AI technology first may have enormous strategic advantages” in both military and economic terms.

“There seems to be this willingness to throw safety and caution out the window and just race as fast as possible to the most advanced AI,” he said. “I think those dynamics are ones that we should absolutely rule out as soon as possible [via regulation]. We really need to adopt the precautionary principle and try and play for as much time as we can.”

Osborne added that while international consensus around AI regulation will be tough to achieve, it is possible: "There's some reason for hope in that we've been pretty good at regulating the use of nuclear weapons, at least for several decades, where there is a similar sort of strategic advantage if someone was able to use them successfully."

In June 2018, angel investor Ian Hogarth predicted the rise of what he called “AI nationalism”, arguing that the transformational potential of AI will prompt “an accelerated arms race…between key countries…[where] we will see increased protectionist state action to support national champions, block takeovers by foreign firms and attract talent”.

He added: “While the idea of AI as a public good provides me personally with a true north, I think it is naive to hope we can make a giant leap there today, given the vested interests and misaligned incentives of nation states, for-profit technology companies and the weakness of international institutions.”

Social changes prompted by AI

Osborne told MPs that, beyond the need to avoid an arms race over the technology, policymakers also need to begin preparing for the societal changes AI could usher in, which he compared to the paradigm shifts prompted by the automobile and the internet.

“You might say that AI is already at a similar scale, of being able to impact on a very wide variety of different human endeavours,” he said. “When the world changes a lot, of course, there are risks that are posed, and there are winners and losers, so we do have to prepare ourselves for those rapid changes.”

He added that in the near future, it would be reasonable to expect a high level of economic disruption from AI, “including much churn in labour markets as tasks and occupations become more automated”.

Michael Cohen, a DPhil scholar at Oxford University studying AI safety, added that a sufficiently advanced AI would enable the "economic output of humans to be produced much more cheaply", an impact on the economy comparable to that of the combustion engine.

However, he said while the combustion engine replaced horses because it could wholly replicate their role in the economy as transport, AI is still too rudimentary to fully replicate a wide range of complex human activities.

Osborne added that while there are genuine examples where AI could replace human labour – operating in extreme environments such as space, for example – the technology should instead be conceptualised as an augmentation of human labour.

“[Although] AI may not be replacing wholesale occupations…certainly the technology we have already has enormous potential for transformation across the economy and in society more broadly,” he said.

Given the diversity of AI as a technology, Osborne concluded that any regulation must be as flexible and non-prescriptive as possible in its definition of what constitutes an AI system, so that certain use cases (and their wider implications) are not dismissed or overlooked.

Source: www.computerweekly.com
