Scrutinising AI requires holistic, end-to-end system audits


Organisations must conduct end-to-end audits that consider both the social and technical aspects of artificial intelligence (AI) to fully understand the impacts of any given system, but progress is being held back by a limited understanding of both how to conduct holistic audits and the limitations of the process, say algorithmic auditing experts.

At the inaugural International Algorithmic Auditing Conference, hosted in Barcelona on 8 November by algorithmic auditing firm Eticas, experts had a wide-ranging discussion on what a “socio-technical” audit for AI should entail, as well as various challenges associated with the process.

Attended by representatives from industry, academia and the third sector, the conference aims to create a shared forum for experts to discuss developments in the field and help establish a roadmap for how organisations can manage their AI systems responsibly.

Those involved in this first-of-its-kind gathering will go on to Brussels to meet with European Union (EU) officials and other representatives from digital rights organisations, to share their collective thinking on how AI audits can and should be regulated.

What is a socio-technical audit?

Gemma Galdon-Clavell, conference chair and director of Eticas, said: “Technical systems, when they’re based on personal data, are not just technical, they are socio-technical, because the data comes from social processes.”

She therefore described a socio-technical audit as “an end-to-end inquiry into how a system works, from the moment that you choose the data that is going to be training your system, up until the moment that the algorithmic decision is handled by a human being” or otherwise impacts someone.

She added that if organisations focus only on the technical aspects of a system, and forget about the social interaction the system produces, “you’re not really auditing [because] you’re not looking at the harms, you’re not looking at the context”.

However, the consensus among conference attendees was that organisations are currently failing to meaningfully interrogate their systems.

Shea Brown, CEO of BABL AI, gave the example of the human-in-the-loop as an often overlooked aspect of socio-technical audits, despite a significant amount of risk being introduced to the system when humans mediate an automated decision.

“Most of the risk that we find, even beyond things like bias, are the places where the algorithm interacts with a person,” he said. “So if you don’t talk to that person, [you can’t] figure out ‘what’s your understanding about what that algorithm is telling you, how are you interpreting it, how are you using it?’”

Another significant part of the problem is that AI systems are often developed in a haphazard fashion, which makes it much harder to conduct socio-technical audits later on.

“If you spend time inside tech companies, you quickly learn that they often don’t know what they’re doing,” said Jacob Metcalf, a tech ethics researcher at Data & Society, adding that firms often will not know basic information such as whether their AI training sets contain personal data or what their demographic make-up is.

“There’s some really basic governance problems around AI, and the idea is that these assessments force you to have the capacity and the habit of asking, ‘how is this system built, and what does it actually do in the world?’”

Galdon-Clavell added that, from her experience of auditing at Eticas, “people don’t document why things are done, so when you need to audit a system, you don’t know why decisions were taken…all you see is the model, you have no access to how that came about”.
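The kind of basic accounting Metcalf and Galdon-Clavell describe can start very simply. Below is a minimal sketch, not taken from either organisation's practice, of a script that flags columns in a training set whose names suggest personal data and reports its demographic make-up; the file name, keyword list and column names are all hypothetical.

```python
import pandas as pd

# Hypothetical keywords suggesting a column may hold personal data
PERSONAL_DATA_HINTS = ["name", "email", "phone", "address", "postcode", "dob"]

def summarise_training_set(path: str, demographic_cols: list[str]) -> None:
    """Report possible personal-data columns and the demographic composition of a dataset."""
    df = pd.read_csv(path)

    # Flag columns whose names hint at personal data
    flagged = [col for col in df.columns
               if any(hint in col.lower() for hint in PERSONAL_DATA_HINTS)]
    print("Columns that may contain personal data:", flagged or "none found")

    # Report the share of each group for every recorded demographic attribute
    for col in demographic_cols:
        if col in df.columns:
            shares = df[col].value_counts(normalize=True).round(3)
            print(f"\nMake-up by '{col}':\n{shares}")
        else:
            print(f"\nNo '{col}' column recorded - composition unknown.")

if __name__ == "__main__":
    # Hypothetical file and attribute names, purely for illustration
    summarise_training_set("training_data.csv", ["gender", "age_band"])
```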

A standardised methodology for adversarial testing

To combat the lack of internal knowledge around how AI systems are developed, the auditing experts agreed on the pressing need for a standardised methodology for how to conduct a socio-technical audit.

While no standardised methodology currently exists, they said it should include practical steps to take at each stage of the auditing process, without being so prescriptive that it fails to account for the highly contextual nature of AI.

However, digital rights academic Michael Veale said standardisation is a tricky process when it comes to answering inherently social questions.

“A very worrying trend right now is that legislators such as the European Commission are pushing value-laden choices around fundamental rights into SDOs [standards development organisations],” he said, adding that these bodies have a duty to push back and refuse any mandate for them to set standards around social or political issues.

“I think the step really is to say, ‘well, what things can we standardise?’. There may be some procedural aspects, there may be some technical aspects that are suitable for that, [but] it’s very hazardous to ever get into a situation where you separate the political from the technical – they are very deeply entwined in algorithmic systems,” added Veale.

“A lot of our anxieties around algorithms represent our concerns with our social situations and our societies. We cannot pass those concerns off to SDOs to standardise away – that will result in a crisis of legitimacy.”

Another risk of prescriptive standardisation, according to Brown, is that the process descends into a glorified box-ticking exercise. “There’s a danger that interrogation stops and that we lose the ability to really get at the harms if they just become standardised,” he said.

To prevent socio-technical audits from becoming mere box-ticking exercises, and to ensure those involved do not otherwise abuse the process, Galdon-Clavell posited that audits should be adversarial in nature.

“You can have audits that are performed by people outside of the system, by exploiting the possibilities of the system to be reverse-engineered, and so through adversarial approaches you could expose when audits have been used as a tick-box exercise, or as a non-meaningful inspection exercise,” she said, adding Eticas and others in attendance would be hashing out how this process could work in the coming weeks.
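As an illustration of the external, reverse-engineering approach Galdon-Clavell describes, the sketch below, which is not Eticas's methodology, probes a black-box decision system with paired inputs that differ only in a protected attribute and compares outcome rates. The query_model function stands in for whatever interface an outside auditor can reach, and its behaviour here is fabricated so the example runs.

```python
import random

def query_model(applicant: dict) -> bool:
    """Stand-in for the system under audit; a real external audit would call the deployed system."""
    # Fabricated behaviour, purely so the sketch runs end to end
    return random.random() < (0.6 if applicant["gender"] == "male" else 0.45)

def paired_probe(n_pairs: int = 1000) -> None:
    """Submit identical applicants that differ only by gender and compare approval rates."""
    approvals = {"male": 0, "female": 0}
    for _ in range(n_pairs):
        base = {"income": random.randint(20_000, 80_000),
                "age": random.randint(21, 65)}
        # Identical profiles except for the protected attribute
        for gender in approvals:
            if query_model({**base, "gender": gender}):
                approvals[gender] += 1

    rates = {g: count / n_pairs for g, count in approvals.items()}
    print("Approval rates:", rates)
    print("Gap between groups:", round(abs(rates["male"] - rates["female"]), 3))

if __name__ == "__main__":
    paired_probe()
```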

Public sector woes

Problems around socio-technical auditing are exacerbated for public sector organisations because, even when an AI supplier has adequately documented the development process, they lack the capacity to scrutinise that documentation, or are prevented from inspecting the system at all by restrictive intellectual property (IP) rights.

“In many cases, the documentation simply doesn’t exist for people in the public sector to be able to understand what’s going on, or it isn’t transferred, or there’s too much documentation and no one can make sense of it,” said Divij Joshi, a doctoral researcher at University College London.


“It’s quite scary to me that in the public sector, agencies that ought to be duly empowered by various kinds of regulations to actually inspect the technologies they’re procuring, aren’t able to do so… because of intellectual property rights.”

Ramak Molavi, a senior researcher at the Mozilla Foundation, also criticised the public procurement setup, adding the public sector’s general lack of knowledge around AI means “they are totally dependent on the suppliers of knowledge, they take [what they say] as reality – they get an opinion but for them, it’s not an opinion, it’s a description”.

Jat Singh, a research professor at the University of Cambridge, gave the example of the New South Wales government in Australia, which had contracted an AI-powered welfare system from a private supplier: after public officials were denied access to inspect a particular welfare decision on the basis of IP, the government simply introduced a new provision into the tendering process that meant the company had to give up the information.

During the House of Lords Home Affairs and Justice Committee inquiry into the use of advanced algorithmic technologies by UK police, Sandra Wachter, a professor of technology and regulation at the Oxford Internet Institute, made a similar point, arguing that public sector buyers should use their purchasing power to demand access to suppliers’ systems to test and prove their claims about, for example, accuracy and bias.

“When people don’t want to tell you how [an algorithm] is working, it’s either because they don’t want to, or because they don’t know. I don’t think either is acceptable, especially in the criminal justice sector,” she said, adding that while a balance needs to be struck between commercial interests and transparency, people have a right to know how life-changing decisions about them are made. 

“When people say it’s just about trade secrets, I don’t think that’s an acceptable answer. Somebody has to understand what’s really going on. The idea that liberty and freedom can be trumped by commercial interests, I think, would be irresponsible, especially if there is a way to find a good middle ground where you can fully understand what an algorithm is doing … without revealing all the commercial secrets.”
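One way a public sector buyer could act on Wachter's argument, assuming it has secured the kind of access she describes, is to re-score the supplier's model on an evaluation set the buyer controls and compare the results against the supplier's claimed figures. The sketch below is a hypothetical illustration of that check; the predict callable, column names and thresholds are assumptions for the example, not anything described by the speakers.

```python
from typing import Callable
import pandas as pd

def verify_claims(predict: Callable[[pd.DataFrame], pd.Series],
                  eval_set: pd.DataFrame,
                  claimed_accuracy: float,
                  group_col: str = "group",
                  label_col: str = "label",
                  max_fpr_gap: float = 0.05) -> None:
    """Check a supplier's accuracy claim and the false-positive-rate gap between groups."""
    scored = eval_set.assign(pred=predict(eval_set))
    accuracy = (scored["pred"] == scored[label_col]).mean()

    # False-positive rate per demographic group, where the group has any true negatives
    fpr = {}
    for group, rows in scored.groupby(group_col):
        negatives = rows[rows[label_col] == 0]
        if len(negatives):
            fpr[group] = (negatives["pred"] == 1).mean()

    gap = max(fpr.values()) - min(fpr.values()) if fpr else 0.0
    print(f"Measured accuracy {accuracy:.3f} vs claimed {claimed_accuracy:.3f}")
    print(f"False-positive rates by group: {fpr} (gap {gap:.3f})")
    print("Claims hold on this set" if accuracy >= claimed_accuracy and gap <= max_fpr_gap
          else "Claims not reproduced on this set")

if __name__ == "__main__":
    # Toy demonstration with fabricated data and a trivial stand-in model
    demo = pd.DataFrame({
        "score": [0.2, 0.8, 0.6, 0.1, 0.9, 0.4],
        "group": ["a", "a", "b", "b", "a", "b"],
        "label": [0, 1, 1, 0, 1, 0],
    })
    verify_claims(lambda df: (df["score"] > 0.5).astype(int),
                  demo, claimed_accuracy=0.9)
```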

Limits of auditing

Galdon-Clavell said auditing should be thought of as just one tool – albeit an important one – in making the deployment of AI more accountable.

“AI auditing is at the heart of the effort to ensure that the principles we have developed around AI are translated into specific practices that mean the technologies that make decisions about our lives actually go through a process of ensuring that those decisions are fair, acceptable and transparent,” she said.


Jennifer Cobbe, a research associate at the University of Cambridge, added it was important to remember that auditing alone cannot solve all the issues bound up in the operation of AI, and that even the best-intentioned audits cannot resolve issues with systems that are inherently harmful to people or groups in society.

“We need to be thinking about what kinds of things are beyond those mechanisms, as well as about democratic control. What kinds of things do we simply say are not permitted in a democratic society, because they’re simply too dangerous?” she said.

While the current EU AI Act, for example, has attempted to draw red lines around certain AI use cases considered to be “an unacceptable risk” – including systems that distort human behaviour or those that allow the remote, real-time biometric identification of people in public places – critics have previously shared concerns that the prohibition does not extend to the use of AI by law enforcement.  

Outside of prohibiting certain AI use cases, auditing also needs to be accompanied by further measures if systems are going to be seen as remotely trustworthy.

“A critically important and often overlooked goal of auditing and assessment is to give harmed parties or impacted communities an opportunity to contest how the system was built and what the system does,” according to Metcalf. “If the point of doing the assessment is to reduce the harm, then the way the assessment is structured needs to provide a foothold for the impacted parties to demand a change.”

He added that the end goal was greater democratic control of AI and other algorithmic technologies: “This is a moment where we need to be asserting the right to democratically control these systems. AI is for people to have better lives. It’s not for corporations to limit our futures.”

Socio-technical auditing requirements should also, according to Mozilla’s Molavi, be accompanied by strong enforcement. “It’s a political question if you want to fund enforcement or not,” she said, adding that in the privacy and data protection spaces, for example, “we have nearly nobody enforcing the law”.
