“Taking our data without notice isn’t convenient, it’s creepy.”
Everybody’s favourite big tech giant, Amazon, is facing yet another class-action lawsuit, this time for allegedly deploying biometric recognition technologies to monitor Amazon Go customers in its New York City outlets without their knowledge. According to the lawsuit, Amazon violated a 2021 NYC law which mandates that all business establishments that track their customers’ biometric information, including retail stores, must at least inform their customers that they are doing so. Amazon apparently didn’t.
“The lawsuit was filed in the U.S. District Court for the Southern District of New York on behalf of Brooklyn resident Alfredo Rodriguez Perez and a proposed class of tens of thousands of Amazon Go customers,” says the privacy advocacy group Surveillance Technology Oversight Project (S.T.O.P.). “The complaint claims that from January 2022 to March 13, 2023 Amazon failed to post any sign stating that Amazon Go stores collect biometric data, including for over a month after Mr. Perez told Amazon it violated New York City law by failing to do so.”
Amazon opened its first Go stores in New York in 2019 and now has ten stores in the city, all in Manhattan. The stores operate on the premise that customers can walk in, take whatever products they want off the shelves and leave without checking out. The company monitors visitors’ actions and charges their accounts when they leave the store. It is the epitome of tech-enabled convenience, but not all New Yorkers are willing to trade in their most personal data for minimal gains in convenience.
Amazon only recently erected signs informing New York customers of its use of biometric recognition technology, more than a year after the disclosure law went into effect, claims the lawsuit. The company has also allegedly begun posting signs claiming that Amazon only harvests biometric data from customers who opt into the company’s palm scanner program. However, the plaintiffs in the lawsuit claim that the company was collecting biometric data on all customers, such as their body shape and size, including those who refused to use the palm scanner.
“New Yorkers shouldn’t have to worry that we’ll have our biometric data secretly tracked anytime we want to buy a bag of chips,” said Surveillance Technology Oversight Project Executive Director Albert Fox Cahn. “Taking our data without notice isn’t convenient, it’s creepy. We have a right to know when our biometric data is used, and it’s appalling that one of the world’s largest companies could so flagrantly disregard the law. It’s stunning to think just how many New Yorkers’ data has already been compromised, and without them ever knowing it.”
New York is one of a small but growing handful of US cities that have passed biometric laws over the past couple of years. So far, three states — Texas, Washington and Illinois — have passed standalone biometrics laws, though many more are expected to follow suit this year. In Illinois alone, more than 1,000 class action lawsuits have been filed under the state’s Biometric Information Privacy Act (BIPA). The public is increasingly attuned to biometric privacy risks and the litigation costs are growing for companies, notes the Cybersecurity Law Report:
BIPA applies to companies that collect, capture, purchase, obtain, disclose, or disseminate “biometric identifiers,” defined as “a retina or iris scan, fingerprint, voiceprint, or a scan of hand or face geometry,” or “biometric information,” defined as any “information, regardless of how it is captured, converted, stored, or shared, based on an individual’s biometric identifier used to identify an individual.”
Companies subject to the law must:
• have a publicly available written biometrics policy;
• obtain an individual’s written consent prior to collection; and
• otherwise comply with the statutory restrictions on biometric use, sale and storage.

The risks of non-compliance are steep: BIPA permits actual damages or liquidated damages of $1,000 for each negligent violation and $5,000 for each reckless or intentional violation, plus attorneys’ fees and costs and injunctive relief.

The pace of BIPA litigation and settlements has been relentless. Last year, Facebook settled a BIPA class action over its photo-tagging feature for $650 million, and TikTok settled for $92 million over face detection in videos. Microsoft, Google, IBM and others have not escaped scrutiny.
Meanwhile, in Europe…
On the other side of the Atlantic, the backlash against the expanding use of biometric surveillance systems, by companies and governments alike, is also growing. In the UK, the Information Commissioner’s Office (ICO) last month reprimanded North Ayrshire Council for using facial recognition technology in secondary schools “in a manner that is likely to have infringed data protection law.” Why it took a year and a half for the ICO to reach this conclusion, despite a sustained public backlash against the move, is anyone’s guess.
In October 2021, nine schools in the Scottish region of North Ayrshire started using facial recognition systems as a form of contactless payment in cashless canteens (cafeterias in the US), until a public outcry put paid to the pilot scheme. Yet as I reported months later, rather than shelving the idea, the Tory government actually doubled down:
According to a new report in the Daily Mail, almost 70 schools have signed up for a system that scans children’s faces to take contactless payments for canteen lunches while others are reportedly planning to use the controversial technology to monitor children in exam rooms. This time round, however, the government didn’t even bother informing the UK Biometrics and Surveillance Camera Commissioner Fraser Sampson of the plans being drafted by the Department for Education (DfE).
In Belgium, a petition filed with the country’s Parliament and signed by organizations including the Belgium Human Rights League warns that “Facial recognition threatens our freedoms.” The petition calls for a ban on facial recognition in public places as well as its use by authorities in identifying people.
The use of this technology on our streets would make us permanently identifiable and monitored. This amounts to giving the authorities the power to identify the entirety of the population in the public space, which constitutes an invasion of privacy and of citizens’ right to anonymity.
The petition warns that facial recognition will harm marginalized groups by facilitating yet more systemic discrimination and bias. At the same time, data breaches and leaks risk exposing citizens’ most private data.
EU Society “Not Ready for Facial Recognition”
The EU may have set the global standard for data protection, but it too is pushing the ethical boundaries when it comes to collecting and storing citizens’ biometric data. It is in the process of building one of the largest facial recognition systems on planet Earth as part of plans to modernize policing across the 27-member bloc. This data could end up being shared with the US. As I reported last July, the US is planning to trade its citizens’ biometric data — some of it collected without consent — for the biometric data harvested by its “partner” governments in Europe and beyond.
Privacy advocates have called for an outright ban on biometric surveillance technologies due to the threat they pose to civil liberties. They include Wojciech Wiewiórowski, who leads the EU’s in-house data protection agency, the EDPS, which is supposed to ensure the EU is complying with its own strict privacy rules. In November 2021, Wiewiórowski warned that European society is not ready for facial recognition technology: the use of the technology, he said, would “turn society, turn our citizens, turn the places we live, into places where we are permanently recognizable … I’m not sure if we are really as a society ready for that.”
In a more recent interview, with EUobserver, the EU’s data protection supervisor voiced concerns that the EU is trampling on its own values and on people’s rights to privacy and data protection as it expands its data dragnet in areas such as migration and law enforcement.
In France, Emmanuel Macron’s broadly detested government is pushing for the introduction of AI-powered surveillance systems for the 2024 Paris Olympics. During the bill’s passage through the Senate in January, an amendment to include facial recognition was rejected by the Senate’s law committee. That is the good news. However, Amnesty International warned this week that if the proposed bill is approved, it will legalize the use of a pervasive AI-powered mass video surveillance system for the first time in the history of France — and the European Union:
This colossal surveillance architecture, according to French lawmakers, is “experimental” and will be used to ensure safety and security during the games. Amnesty International fears, however, that this bill will expand police powers by broadening the government’s arsenal of surveillance equipment, permanently.
“Re-stocking security apparatus with AI-driven mass surveillance is a dangerous political project which could lead to broad violations of human rights. Every action in a public space will get sucked into a dragnet of surveillance infrastructure, undermining fundamental civic freedoms,” said Agnes Callamard, Amnesty International’s Secretary General.
“French lawmakers have failed to prove that this legislation meets the principles of necessity and proportionality, which are absolutely fundamental to ensuring security and surveillance measures do not threaten the rights to freedom of assembly and association, privacy, and non-discrimination. While the need for security during the event is understandable, international human rights law still applies to the Olympics. In their existing format, these blanket applications of AI-driven mass surveillance are in complete violation of the right to privacy and other rights.”
Of course, the new law is unlikely to apply just to the Paris Olympics. A letter sent by MEPs to France’s National Assembly warns that the law was written so as to apply broadly to “sporting, recreational or cultural events, which, by their scale or their circumstances, are particularly exposed to the risk of acts of terrorism or serious threat to the safety of persons.” This, it added, would be at odds with language used in a precursor to the EU’s forthcoming AI Act, which prohibits the use of automated analysis of human features, biometric and behavioural signals.
This is a key point: in the fully digitised world that is fast taking shape around us, many of the decisions or actions taken by corporations, central banks and local, regional or national authorities that affect us will be fully automated; no human intervention will be needed. That means that trying to get those decisions or actions reversed or overturned is likely to be a Kafkaesque nightmare that even Kafka may have struggled to foresee.
As I warn in my book Scanned, these digital systems of surveillance and control, if allowed to take root, will represent one of the biggest collective trade-offs of modern history. We are essentially being asked — or rather, not asked — to trade in a system, however imperfect and degraded, of rights, laws and freedoms for one of centralised, automated, top-down technocratic control. The irony is that this is all happening at a time when control is rapidly slipping from the grasp of many governments in the West, as we are already seeing play out in France.
In his book Endspiel des Kapitalismus (Endgame for Capitalism) the German economist Norbert Häring argues that the elites are fully aware that the Ponzi scheme of late-stage financial capitalism, built upon the foundations of unpayable levels of public and private debt and unsustainable levels of resource extraction, will soon crash. Before that occurs, they are trying to usher in a neofeudal society, in which they can continue to hold power and wealth. The digital systems of surveillance and control that will enable this transition, including CBDCs, are almost in place.