Civil society groups call on EU to put human rights at centre of AI Act

Human Rights Watch and 149 other civil society organisations are urging European Union (EU) institutions to enhance protections for people’s fundamental rights in the upcoming Artificial Intelligence Act (AIA).

In May 2023, committees in the European Parliament voted through a raft of amendments to the AIA – including a number of bans on “intrusive and discriminatory” systems as well as measures to improve the accountability and transparency of AI deployers – which were later adopted by the whole Parliament during a plenary vote in June.

However, the amendments only represent a “draft negotiating mandate” for the European Parliament, with closed-door trilogue negotiations set to begin between the European Council, Parliament and Commission in late July 2023 – all of which have adopted different positions on a range of matters.

The Council’s position, for example, is to implement greater secrecy around police deployments of AI, while simultaneously attempting to broaden exemptions that would allow the technology to be more readily deployed in the context of law enforcement and migration.

The Parliament, on the other hand, has opted for a complete ban on predictive policing systems, and favours expanding the scope of the AIA’s publicly viewable database of high-risk systems to also include those deployed by public bodies.  

Ahead of the closed-door negotiations, Human Rights Watch, Amnesty International, Access Now, European Digital Rights (EDRi), Fair Trials and dozens of other civil society groups have urged the EU to prohibit a number of harmful, discriminatory or abusive AI applications; mandate fundamental rights impact assessments throughout the lifecycle of an AI system; and provide effective remedies for people negatively affected by AI, among a number of other safeguards.

“In Europe and around the world, AI systems are used to monitor and control us in public spaces, predict our likelihood of future criminality, facilitate violations of the right to claim asylum, predict our emotions and categorise us, and to make crucial decisions that determine our access to public services, welfare, education and employment,” they wrote in a statement.

“Without strong regulation, companies and governments will continue to use AI systems that exacerbate mass surveillance, structural discrimination, centralised power of large technology companies, unaccountable public decision-making and environmental damage.

“We call on EU institutions to ensure that AI development and use is accountable, publicly transparent, and that people are empowered to challenge harms.”

National security and military exemptions

For the statement signatories, a major point of contention around the AIA as it stands is that national security and military uses of AI are completely exempt from its provisions, while law enforcement uses are partially exempt.

The groups are therefore calling on the EU institutions to draw clear limits on the use of AI by national security, law enforcement and migration authorities, particularly when it comes to “harmful and discriminatory” surveillance practices.

They say these limits must include a full ban on real-time and retrospective “remote biometric identification” technologies in publicly accessible spaces, by all actors and without exception; a prohibition on all forms of predictive policing; a removal of all loopholes and exemptions for law enforcement and migration control; and a full ban on emotion recognition systems.

They added the EU should also reject the Council’s attempt to include a blanket exemption for systems developed or deployed for national security purposes; and prohibit the use of AI in migration contexts to make individualised risk assessments, or to otherwise “interdict, curtail and prevent” migration.

The groups are also calling for the EU to properly empower members of the public to understand and challenge the use of AI systems, noting it is “crucial” that the AIA develops an effective framework of accountability, transparency, accessibility and redress.

This should include an obligation on all deployers of AI to conduct and publish fundamental rights impact assessments before each deployment of a high-risk AI system; to register their use of AI in the publicly viewable EU database before deployment; and to ensure that people are notified and have a right to seek information when affected by AI systems.

All of this should be underpinned by meaningful engagement with civil society and people affected by AI, who should also have a right to effective remedies when their rights are infringed.

Big tech lobbying

Lastly, the undersigned groups are calling for the EU to push back on big tech lobbying, noting that negotiators “must not give in to lobbying efforts of large tech companies seeking to circumvent regulation for financial interest.”

In 2021, a report by Corporate Europe Observatory and LobbyControl revealed that big tech firms now spend more than €97m annually lobbying the EU, making tech the biggest lobby sector in Europe, ahead of pharmaceuticals, fossil fuels and finance.

The report found that despite a wide variety of active players, the tech sector’s lobbying efforts are dominated by a handful of firms, with just 10 companies responsible for almost a third of the total tech lobby spend. These are, in ascending order of spend, Vodafone, Qualcomm, Intel, IBM, Amazon, Huawei, Apple, Microsoft, Facebook and Google, which collectively spent more than €32m to get their voices heard in the EU.

Given the influence of private tech companies over EU processes, the groups said the EU should therefore “remove the additional layer added to the risk classification process in Article 6 [in order to] restore the clear, objective risk-classification process outlined in the original position of the European Commission.”

Speaking ahead of the June Parliament plenary vote, Daniel Leufer, a senior policy analyst at Access Now, told Computer Weekly that Article 6 was amended by the European Council to exempt systems from the high-risk list (contained in Annex Three of the AIA) if they are “purely accessory” to decision-making, which would essentially allow AI providers to opt out of the regulation based on a self-assessment of whether their applications are high-risk or not.

“I don’t know who is selling an AI system that does one of the things in Annex Three, but that is purely accessory to decision-making or outcomes,” he said at the time. “The big danger is that if you leave it to a provider to decide whether or not their system is ‘purely accessory’, they’re hugely incentivised to say that it is and to just opt out of following the regulation.”

Leufer added the Parliament text now includes “something much worse…which is to allow providers to do a self-assessment to see if they actually pose a significant risk”.

Source: www.computerweekly.com
