The ethics of artificial intelligence has been widely discussed in the UK public sector, and there is a surfeit of high-level guidance available to technology teams pursuing AI projects. But this guidance needs to be translated into practical contexts, panellists at Tech Monitor's recent Digital Responsibility Summit agreed, if it is to have a meaningful impact.
There has been plenty of discussion of the ethics of AI, said Mark Durkee, team lead at the Centre for Data Ethics and Innovation (CDEI), during a panel entitled 'The ethics of emerging technology: does the public sector need more support?'.
“We’re definitely not in a position where lots of things are being done with AI and data and no one’s thinking about ethics at all,” he said. “In fact, in many ways there is a lot of thinking about ethics.”
One challenge for public sector technology teams as they put AI into practice, however, is picking through the many sources of guidance on ethics and deciding what is relevant for their project, Durkee said. "Who should they be listening to?"
Another is translating high-level ethics guidance into a practical context, Durkee said. “How do they start to map the high-level stuff to help people working in local government, working in policing, working in lots of use cases where there are lots of teams around the country doing similar things?”
"Some of the principles are very sort of motherhood and apple pie – no one would disagree with them," Copland agreed. "But, how do I put it into action?"
All sessions from the Digital Responsibility Symposium are available to watch on demand. Register here.
Busy public sector professionals don’t have the time to browse through long reports from think tanks, Copland added. When creating ethics guidance, organisations shouldn’t just say ‘local government should’ or ‘the police should’, he argued. Instead, they should make their guidance relevant to particular roles in specific organisational contexts.
“We’ve got to get that specific advice to the right person in the right context.”
Another concern is that AI ethics discussions are divorced from broader ethical governance of public sector bodies. “We might have a data science team making decisions about things that really are broader organisational decisions, and maybe need to find a way of bridging that,” said Durkee.
Healthcare providers, for example, already have well-developed codes of medical ethics, and their decisions about AI ethics should tie into those guidelines.
Engaging with the public on AI ethics
Involving citizens in discussions around the appropriate use of emerging technology can help build trust. But it must be done in a meaningful way, said Dr Rick Muir, director of the Police Foundation. "There's probably limited value in asking people quite high-level questions [about a new technology]," he said.
“I think it’s much better to engage people in a kind of deliberation about use cases and to get into some of the detail.”
Muir suggested that organisations bring in a panel of experts who understand the technology, alongside a panel of "ordinary people", with the experts talking them through how the technology would work.
"From a policing point of view, you can then say to the community: 'look, we've been through a process on this and we have involved the public in the discussion and we're now confident that there's public support for doing this'," he said.