In Today's Op-Ed - FaceFirst CEO Peter Trepp Responds to the Clearview AI Controversy


About a week ago, The New York Times published an article about a largely unknown facial recognition company called Clearview AI, entitled “The Secretive Company That Might End Privacy as We Know It.” Articles about facial recognition, good or bad, usually attract readers’ attention, but this one is drawing heightened attention, and here’s why:

Clearview is one of many companies that have scraped social media websites to build a large database of faces, which it searches with a probe image. For law enforcement, for example, this means that an image of a suspect (the probe image) can be compared against Clearview’s claimed database of 3 billion images to find a possible match and then link it back to the source of the database image (e.g., Twitter or Facebook). How did they get 3 billion images from social media? Hint: it wasn’t by asking permission from either the platforms or their users.
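To make the mechanics concrete, here is a minimal sketch of how a probe-image search against a scraped gallery typically works. This is not Clearview’s actual system; the embedding function, file names and threshold below are illustrative placeholders.

```python
# Illustrative probe-image search: compare one face embedding against a gallery
# of precomputed embeddings and return the source URLs of the closest matches.
# `embed_face` is a hypothetical stand-in for any face-embedding model.
import numpy as np

def embed_face(image) -> np.ndarray:
    """Hypothetical: map a face image to an L2-normalized embedding vector."""
    raise NotImplementedError("plug in a real face-embedding model here")

# Gallery assumed precomputed: one embedding per scraped image plus its source URL.
gallery = np.load("gallery_embeddings.npy")             # shape (N, D), illustrative file
sources = open("gallery_urls.txt").read().splitlines()  # N source URLs, illustrative file

def search(probe_image, top_k: int = 5, threshold: float = 0.6):
    """Return (source_url, similarity) pairs for the closest gallery faces."""
    probe = embed_face(probe_image)                      # shape (D,)
    scores = gallery @ probe                             # cosine similarity (vectors normalized)
    best = np.argsort(scores)[::-1][:top_k]
    return [(sources[i], float(scores[i])) for i in best if scores[i] >= threshold]
```

At the scale Clearview claims, a brute-force scan like this would be far too slow; real systems use approximate nearest-neighbor indexes, but the matching principle is the same.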

Scraping images from the big platforms is not trivial; in fact, it’s challenging. Over the years, social media companies have diligently implemented technical roadblocks to prevent data scraping. Nevertheless, savvy engineers find their way around such obstacles. While this was easier in the past, there is little stopping determined hackers from collecting this information even today.

Clearview and its investors, including Peter Thiel (who sits on the board of Facebook), are building a company that is not concerned with abiding by the privacy policies and/or laws that are either in place or coming soon. That’s fine, but both policies and laws will matter a lot less when the court of public opinion is called to order. Moreover, Clearview should have received a letter from Twitter by now asking them to cease and desist for having violated the social network’s policies and demanding that they delete any scraped images. I don’t think Clearview will comply because (1) the founder, Mr. Ton-That, is a technologist, not a lawyer, and (2) the data is probably “offshore” and arguably out of reach of U.S. law.

So, what’s wrong with Clearview’s business model and why have they breached our social contract? Setting aside the constraints of legislation and social media policies for a moment, as Mr. Ton-That has apparently done, let’s consider the perspective of social acceptability and business ethics (assuming these still exist today). The key issues are:

1. Contributing one’s image to a social media platform and choosing to make it “public” does not automatically give anyone the right to repurpose that image regardless of how clever they were in obtaining it. It’s possible that your image was collected at a time when the policies of your social media platform were not fully in place yet; that shouldn’t matter.

2. Transparent collection and use of data are essential to the foundation of a platform that seeks to use its product as a commercial application. This point should matter both to the collector/seller of the information and to the commercial consumer of the application. Customers should consider the now well-known source of the data and carefully evaluate the social impact of engaging with Clearview.

3. Users should be able to easily search and/or opt-out of databases that contain their image(s). Users should not be required to provide burdensome personal information in order to opt-out.

Here’s the irony: the very people we entrust to uphold the law may be the ones breaking it. It’s been reported that 600 law enforcement agencies are using Clearview. However, as long as they aren’t technically and/or currently breaking the law, maybe there is no harm? There are at least two problems with that argument. First, there is harm in tapping a database that is likely known to have been built illegally. Second, there will almost certainly be significant public backlash, class action lawsuits and probably some action from Washington, DC.

It’s widely known that legislation typically trails the technology curve, and for good reason. The job of a lawmaker is not so much to be proactive and predict the future as it is to be reactive and prevent current events from blowing up into disaster. It’s hard to predict the future and have deep domain expertise in every field, especially highly technical ones. Predictive powers or not, lawmakers are working on new legislation. I’ve had the privilege of visiting lawmakers on Capitol Hill and providing input on a federal bipartisan facial recognition bill that is being considered now. I support the proposed bill as it is worded today, but I’m under no illusion that laws will stop Clearview from continuing. What sets FaceFirst apart from Clearview is that we will never scrape or otherwise illegally obtain images to build databases. Our customers expect that from us, and we will continue to deliver on that promise.

Facial recognition is clearly a very powerful tool and is becoming more widely used every day on its way to ubiquity. However, I don’t think we’re entering a dystopian world any time soon. From my seat, I am extremely excited about unlocking a world of helpful, personalized, efficient applications that will deliver real value to consumers. In the meantime, FaceFirst is committed to delivering a high-quality product in a legal, ethical and socially responsible way.

In summary, I hope that Clearview and their customers are thinking carefully about the future and the impact on society. Clearly, facial recognition is coming in many forms: some we may not like, while others will leave us wondering how we ever got by without them. Buckle up.


'The New Rules of Consumer Privacy'
by FaceFirst CEO Peter Trepp


In The New Rules of Consumer Privacy: Building Loyalty with Connected Consumers in the Age of Face Recognition and AI, FaceFirst CEO and author Peter Trepp has devised a set of rules that will help companies uphold consumers' privacy without sacrificing their security and convenience. By following these rules, brands can create a win-win scenario that will maximize revenue, reduce crime, provide consumers with the best experience possible and ensure that consumers' privacy is reasonably protected. Learn more about the book here, or order it here!

 



Clearview AI's Image Scraping at the Top of the News
'Facial recognition datasets & controversies drive biometrics news last week'


Clearview AI Worsening Public Anxiety About Biometrics & Data Privacy

Facial recognition and controversy around the technology were the theme common to most of the past week’s top stories on Biometric Update. After a few weeks of relative calm for facial biometrics, the biggest stories about court cases, regulation, and market growth were all focused on the same modality; and then there was Clearview AI.

Mastercard’s certification of fingerprint payment card technology remains our top story for the second week in a row, emphasizing the importance of payment card certification to the biometrics industry.

Lawsuits related to facial recognition and the Biometric Information Privacy Act (BIPA) of Illinois generated a pair of the top stories of the week on Biometric Update, but each with a twist on the all-too-common stories of arguments about standing and breaches of informed consent rules. Clearview AI has somehow managed to further worsen public anxiety about biometrics and data privacy, and along with IBM has been slapped with a BIPA suit over image acquisition practices. Both companies scraped images from social media to train facial recognition, but IBM did so to address the demographic disparities found in most facial biometric algorithms, while Clearview seems not to have had such a laudable intention.

The reaction of New Jersey’s Attorney General to the evolving Clearview scandal was also among the most widely read stories of the week, as he barred law enforcement agencies in the state from working with the company.
biometricupdate.com

The Secretive Company That Might End Privacy as We Know It
A little-known start-up helps law enforcement match photos of unknown people to their online images — and “might lead to a dystopian future or something,” a backer says.

Clearview AI devised a groundbreaking facial recognition app. You take a picture of a person, upload it and get to see public photos of that person, along with links to where those photos appeared. The system — whose backbone is a database of more than three billion images that Clearview claims to have scraped from Facebook, YouTube, Venmo and millions of other websites — goes far beyond anything ever constructed by the United States government or Silicon Valley giants.

Federal and state law enforcement officers said that while they had only limited knowledge of how Clearview works and who is behind it, they had used its app to help solve shoplifting, identity theft, credit card fraud, murder and child sexual exploitation cases.

Without public scrutiny, more than 600 law enforcement agencies have started using Clearview in the past year, according to the company, which declined to provide a list. The computer code underlying its app, analyzed by The New York Times, includes programming language to pair it with augmented-reality glasses; users would potentially be able to identify every person they saw. The tool could identify activists at a protest or an attractive stranger on the subway, revealing not just their names but where they lived, what they did and whom they knew.

And it’s not just law enforcement: Clearview has also licensed the app to at least a handful of companies for security purposes.
nytimes.com


New Jersey Bars Police From Using Clearview Facial Recognition App
Reporting about the powerful tool with a database of three billion photos “troubled” the state’s attorney general, who asked for an inquiry into its use.

Gurbir S. Grewal, New Jersey’s attorney general, told state prosecutors in all 21 counties on Friday that police officers should stop using the Clearview AI app.

New Jersey police officers are now barred from using a facial recognition app made by a start-up that has licensed its groundbreaking technology to hundreds of law enforcement agencies around the country.

The New York Times reported last week that Clearview had amassed a database of more than three billion photos across the web — including sites like Facebook, YouTube, Twitter and Venmo. The vast database powers an app that can match people to their online photos and link back to the sites the images came from. “Until this week, I had not heard of Clearview AI,” Mr. Grewal said in an interview. “I was troubled.”

In a promotional video posted to its website this week, Clearview included images of Mr. Grewal because the company said its app had played a role last year in Operation Open Door, a New Jersey police sting that led to the arrest of 19 people accused of being child predators.

“I was surprised they used my image and the office to promote the product online,” said Mr. Grewal, who confirmed that Clearview’s app had been used to identify one of the people in the sting. “I was troubled they were sharing information about ongoing criminal prosecutions.”

Mr. Grewal’s office sent Clearview a cease-and-desist letter that asked the company to stop using the office and its investigations to promote its products.
nytimes.com

EU drops idea of facial recognition ban in public areas: paper
The European Union has scrapped the possibility of a ban on facial recognition technology in public spaces.

Facial recognition artificial intelligence has sparked a global debate about the pros and cons of a technology widely used by law enforcement agencies but abused by authoritarian regimes for mass and discriminatory surveillance.

The EU's revised proposal, part of a package of measures to address the challenges of AI, could still be tweaked as the commission is currently seeking feedback before it presents its plan on Feb. 19.

The U.S. government earlier this month unveiled its own AI regulatory guidelines aimed at limiting authorities’ overreach and urged Europe to avoid aggressive approaches.

Microsoft President Brad Smith has said that a facial recognition AI ban is akin to using a cleaver instead of a scalpel to solve potential problems, while Alphabet CEO Sundar Pichai has voiced support for a temporary ban.
reuters.com

Moscow Launches World's Largest Live Facial Biometrics Surveillance Network

Aiding Police to ID Terrorists & Criminals in Seconds & Locate & Capture Them in Hours

Video surveillance analytics, including biometric facial recognition provided by NtechLab, has been launched at scale in Moscow as of the first of January.

Services provided by NtechLab include detection of faces in the frame and recognition of individuals against a database, followed by notifications to law enforcement, with final identification decisions made only by law enforcement officials and in accordance with the law.

The product made for Moscow’s extensive CCTV system is capable of running simultaneously on hundreds of thousands of cameras. The database used with the system will be a watchlist of suspects, and the FindFace Security mobile app is used for alerts.
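For readers unfamiliar with how such a system is typically wired together, the following is a minimal sketch of the detect, match-against-watchlist, alert loop described above. It is not NtechLab’s FindFace implementation; the detector, embedding model, threshold and file names are illustrative assumptions.

```python
# Illustrative watchlist pipeline: detect faces in a camera frame, compare each
# against a watchlist of embeddings, and raise an alert for a human to review.
# `detect_faces` and `embed_face` are hypothetical stand-ins for real models.
import numpy as np

MATCH_THRESHOLD = 0.7  # illustrative cutoff, not a vendor-published value

def detect_faces(frame):
    """Hypothetical: return cropped face images found in a video frame."""
    raise NotImplementedError

def embed_face(face_crop) -> np.ndarray:
    """Hypothetical: map a face crop to an L2-normalized embedding vector."""
    raise NotImplementedError

watchlist = np.load("watchlist_embeddings.npy")                # shape (N, D), illustrative file
watchlist_ids = open("watchlist_ids.txt").read().splitlines()  # N subject IDs, illustrative file

def process_frame(frame, camera_id: str):
    """Compare every detected face against the watchlist and emit alert leads."""
    for face in detect_faces(frame):
        scores = watchlist @ embed_face(face)                  # cosine similarity (vectors normalized)
        i = int(np.argmax(scores))
        if scores[i] >= MATCH_THRESHOLD:
            # The alert is only a lead: a human officer makes the final identification.
            print(f"ALERT camera={camera_id} candidate={watchlist_ids[i]} score={scores[i]:.2f}")
```

Running a loop like this across hundreds of thousands of camera streams is chiefly a matter of distributing the detection and matching workload, which is what makes a city-scale deployment like Moscow’s notable.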

NtechLab’s CEO notes that companies like Clearview AI “that really do not care for privacy rights” are harming the industry’s reputation.

The system is currently up to 175,000 cameras, with 3.3 billion rubles ($53.3 million) allocated to hardware, according to a state purchase database.
biometricupdate.com

Editor's Note: Around the world, Clearview AI is upsetting industry experts.