Clad in a light black poncho overlaid with bright white piping, the model strides down the grey halls of Berlin Central Station as if it were any other runway. Today, though, his audience is not made up of overawed fashion moguls and baying flesh-and-blood photographers, but inert algorithms analysing frame after frame of footage from CCTV cameras fixed on poles and girders far above the concourse. And the design has the desired effect: those facial recognition algorithms struggle to even identify the model as a person, their judgement thrown by the strange contrasts of his cloak.

This is the scenario imagined in a promotional video from Project Ignotum, an initiative from designers Jan Wertel and Markus Mau that aims to use fashion to resist the spread of AI-powered surveillance.

“We got into this topic [after] learning about something called ‘emotional marketing,’” a type of facial analysis application that draws conclusions about a customer’s emotional reaction to a product or service, Wertel explains. The use of this technology for profit raised the hackles of the designers, who were already concerned at the ability of facial recognition companies to easily match the data acquired from CCTV cameras to multiple aspects of an individual’s identity.

The aim of Project Ignotum is to highlight how one might preserve a sense of anonymity in public spaces. The poncho that eventually resulted harnesses the power of ‘adversarial examples,’ inputs inside an image that nudge an object recognition algorithm into misinterpreting what’s in front of it, or ignoring it entirely.
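The mechanics of an adversarial example can be sketched with a toy model. The linear “detector” and its random weights below are illustrative stand-ins, not any real recognition system; the point is that a small, deliberately chosen perturbation, stepped against the sign of the model’s gradient, reliably pushes the score the wrong way:

```python
import numpy as np

# A toy linear "detector": scores how person-like a flattened image
# patch looks. The weights are random stand-ins, not a trained model.
rng = np.random.default_rng(0)
weights = rng.normal(size=64)

def person_score(patch):
    # Higher score = more confident the patch contains a person.
    return float(weights @ patch)

def adversarial_patch(patch, epsilon=0.1):
    # Fast-gradient-sign-style perturbation: for a linear model the
    # gradient of the score w.r.t. the input is the weight vector, so
    # stepping against its sign is guaranteed to lower the score.
    return patch - epsilon * np.sign(weights)

patch = rng.normal(size=64)
print(person_score(patch))                     # original score
print(person_score(adversarial_patch(patch)))  # always strictly lower
```

In a real attack, the perturbation is computed against a deep network rather than a linear scorer, and constrained so the result still looks like ordinary clothing print to a human eye.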

Project Ignotum’s poncho is designed to confuse object recognition algorithms. (Photo courtesy of Project Ignotum)

Finding the right pattern, however, wasn’t easy. After running through several complex permutations, “out of frustration, we projected this very regular, linear grid,” recalls Wertel. “And that suddenly worked.”

Project Ignotum is not alone in its attempts to use fashion to elude facial recognition systems. Other concepts include brightly patterned hoodies, which a team from Facebook and the University of Maryland found reduced the effectiveness of facial recognition systems by half; a similarly garish T-shirt, which another team from Northeastern University found boosted the wearer’s ability to elude analysis by between 52% and 63%; and even common make-up contouring techniques, which a group from Ben-Gurion University found reduced detection rates to 1.22%.

None of these items, however, is intended for mass production. In the case of Ignotum, explains Mau, “it’s not about building a product like we usually do.” Rather, he says, it’s a statement about the lengths an individual has to go to in order to preserve their anonymity against such systems.

Increasingly, that statement is being heard by state legislators and companies around the world. Bans and restrictions on facial recognition have been passed in several US cities and states, with new guidelines being issued on its use by law enforcement in the UK and calls for its abolition recently made in the European Parliament.

It seems, therefore, that privacy-conscious individuals may not need to contemplate a future of wearing garish ponchos or t-shirts through public streets – but rather one where, at least in the West, comprehensive legal frameworks governing the use of facial recognition are enforced.

Global moves to regulate facial recognition technology

That’s certainly the hope of Adam Schwartz, a staff attorney at the Electronic Frontier Foundation. Right now, though, that future seems altogether distant. “We are seeing news stories about face recognition that are very, very alarming to us every week,” he says.

Such stories largely fall into two categories, explains Schwartz: those that pertain to law enforcement uses of facial recognition, and those in retail. Police use of facial recognition, he argues, has a chilling effect on privacy and free speech while aggravating “inequities we already have in our criminal justice system”. Perhaps its most notable failing in this regard has been the wrongful arrests of at least three men of colour by US police departments, individuals who were incorrectly identified as suspects based on flawed readouts from facial recognition systems.

Commercial use of the technology should also be restricted, argues the EFF attorney. The practice of retail outlets automatically capturing and analysing face prints of customers without their consent, he argues, is not only morally untenable, but also incredibly risky from a cybersecurity standpoint, given the necessity of storing all the data and insights gathered by facial recognition and analysis algorithms. “Invariably, those databases get hacked,” says Schwartz.

That message seems to be getting through to lawmakers. Schwartz and his colleagues point to the example of Illinois, whose Biometric Information Privacy Act mandates that companies obtain an opt-in from customers before exposing them to facial recognition systems.

The law serves as a good example for the kind of restrictions the EFF would like to see controlling the commercial use of facial recognition across the US. “Just this year, we’ve been working on [similar] bills in three states,” he says.

Tough new restrictions are also being imposed on the technology’s use by state and federal institutions. Seventeen cities in the US, including Boston, San Francisco and Oakland, have already banned or severely restricted its use by government departments, while a bill that would severely restrict its use by the federal government is currently being considered by the US Senate.

In Europe, too, legislators are taking an increasingly dim view of mass biometric surveillance. Last October, the European Parliament voted through a resolution calling for an EU-wide ban on the use of facial recognition by law enforcement in public spaces, while Clearview AI was also recently fined by data protection regulators in France and Italy for collecting facial images, among other biometric data, without permission.

Protesters hold a banner during a protest against the use of police facial recognition cameras at Cardiff City Stadium. (Photo by Matthew Horwood/Getty Images)

Turning public opinion

While these developments are grounds for optimism among privacy campaigners, it is also true to say that the use of facial recognition by government departments and private companies remains pervasive. New applications in commercial settings, for example, are emerging all the time, including in spotting potential shoplifters and making payments.

What’s more, while progress seems to have been made in convincing legislators of the need for robust legal frameworks for the technology’s use by government and law enforcement, these are not immune from criticism. In the UK, for example, the College of Policing’s guidance on the appropriate use of live facial recognition by police departments was decried by privacy campaigners as “a hammer blow for privacy and liberty.” Plans have also been proposed in the EU to make it easier for police forces across the continent to share their facial recognition data with one another.

Even so, recent polls suggest a deep well of public scepticism in the US, UK and Europe about facial recognition. A survey of EU citizens, for example, found that fewer than 20% were comfortable with sharing their biometric data with public authorities, while a poll conducted last year in the US found that 57% thought the use of such systems would have a negligible impact in reducing crime rates. In the UK, a 2019 survey by the Ada Lovelace Institute found that while almost half of respondents agreed with its use in daily law enforcement tasks, that support was contingent on the passage of appropriate legal frameworks. Clear majorities, however, were uncomfortable with its use in shops or on public transport.

“I think there is a growing revulsion around the world about face recognition technology,” says Schwartz, a wave that has seen big tech companies such as Facebook, Microsoft and Amazon publicly stop selling such systems to police or abandon their internal use. The pervasive use of facial recognition by authoritarian states like China and Russia will also have a detrimental impact on the popularity of the technology, he adds. “We are optimistic that… it’s a problem we can solve,” says Schwartz. “It’s just a question of having the political will to pass the right laws.”

Mau agrees. While Ignotum’s futuristic poncho wouldn’t look completely out of the ordinary on the streets of Shoreditch or in a queue for Berghain, he doubts the item would be able to resist the advances that are continually made in the detection capabilities of facial recognition algorithms. The ability of the poncho to resist analysis, for example, could easily be compromised if an AI relied on parsing images into diamond grids rather than the conventional rectangular format. As such, says Mau, “I don’t think that’s a race you can win in computing, between hardware and software.”

That’s why it’s so important for effective legal frameworks to be passed and enforced. “I don’t think we will solve this problem with products,” says Mau. While concepts like Ignotum play a valuable role in reminding the public how biometric surveillance can radically reduce the ability of the individual to be just another face in the crowd, ultimately, he says, there “needs to be some kind of legal structure”.

Read more: Why the UK government needs to take police facial recognition seriously
