
Facial Recognition: The New Digital Menace


Editor's Note: These three articles are all by the Electronic Frontier Foundation (EFF), and appeared on their site during the month of October. They are reprinted here under a Creative Commons Attribution License.

Face Recognition Isn't Just Face Identification and Verification: It's Also Photo Clustering, Race Analysis, Real-time Tracking, and More

By Bennett Cyphers, Adam Schwartz, and Nathan Sheard



Governments and corporations are tracking how we go about our lives with a unique marker that most of us cannot hide or change: our own faces. Across the country, communities are pushing back with laws that restrain this dangerous technology. In response, some governments and corporations are claiming that these laws should only apply to some forms of face recognition, such as face identification, and not to others, such as face clustering.

We disagree. All forms of face recognition are a menace to privacy, free speech, and racial justice. This post explores many of the various kinds of face recognition, and explains why all must be addressed by laws.


What Is Face Recognition?

At the most basic level, face recognition technology takes images of human faces and tries to extract information about the people in them.

Here's how it usually works today:

First, the image is automatically processed to identify what is and is not a face. This is often called "face detection." This is a prerequisite for all of the more sophisticated forms of face recognition we discuss below. In itself, face detection is not necessarily harmful to user privacy. However, there is significant racial disparity in many face detection technologies.

Next, the system extracts features from each image of a face. The raw image data is processed into a smaller set of numbers that summarize the differentiating features of a face. This is often called a "faceprint."

Faceprints, rather than raw face images, can be used for all of the troubling tasks described below. A computer can compare the faceprints from two separate images to try to determine whether they show the same person. It can also try to guess other characteristics about the individual (like sex and emotion) from the faceprint.
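
To make the idea of a faceprint concrete, here is a minimal sketch in Python. It assumes some face recognition model has already reduced each face image to a short list of numbers; the vectors, the distance measure, and the 0.6 threshold are illustrative assumptions, not any particular vendor's system.

    import numpy as np

    def same_person(faceprint_a, faceprint_b, threshold=0.6):
        """Guess whether two faceprints (fixed-length number vectors produced
        by some face recognition model) come from the same face."""
        a = np.asarray(faceprint_a, dtype=float)
        b = np.asarray(faceprint_b, dtype=float)
        # Euclidean distance between the vectors: smaller means "more similar".
        # Real systems tune the threshold to trade false matches against misses.
        return np.linalg.norm(a - b) < threshold

    # Made-up 4-number faceprints; real ones contain hundreds of values.
    print(same_person([0.10, 0.80, 0.30, 0.50], [0.12, 0.79, 0.31, 0.48]))  # True
    print(same_person([0.10, 0.80, 0.30, 0.50], [0.90, 0.10, 0.70, 0.20]))  # False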


Face Matching

The most widely deployed class of face recognition is often called "face matching." It tries to match two or more faceprints to determine if they are the same person.


Face matching can be used to link photographs of unknown people to their real identities. This is often done by taking a faceprint from a new image (e.g. taken by a security camera) and comparing it against a database of "known" faceprints (e.g. a government database of ID photos). If the unknown faceprint is similar enough to any of the known faceprints, the system returns a potential match. This is often known as "face identification."
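
As a rough illustration of that one-to-many lookup, the hypothetical Python below compares one unknown faceprint against a small database of known faceprints and reports the closest match, but only if it is similar enough. The database, distance measure, and threshold are assumptions made for the example, not any agency's actual system.

    import numpy as np

    def identify(unknown_faceprint, known_faceprints, threshold=0.6):
        """One-to-many matching: compare an unknown faceprint against a
        database of known people and return the best candidate, or None."""
        unknown = np.asarray(unknown_faceprint, dtype=float)
        best_name, best_distance = None, float("inf")
        for name, known in known_faceprints.items():
            d = np.linalg.norm(unknown - np.asarray(known, dtype=float))
            if d < best_distance:
                best_name, best_distance = name, d
        return best_name if best_distance < threshold else None

    # "Verification" (e.g. unlocking a phone) is the same comparison run
    # against a database containing only the device owner's faceprint.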

Face matching can also be used to figure out whether two faceprints are from the same face, without necessarily knowing whom that face belongs to. For example, a phone may check a user's face to determine whether it should unlock, often called "face verification." Also, a social media site may scan through a user's photos to try to determine how many unique people are present in them, though it may not identify those people by name, often called "face clustering." This tech may be used for one-to-one matches (are two photographs of the same person?), one-to-many matches (does this reference photo match any one of a set of images?), or many-to-many matches (how many unique faces are present in a set of images?). Even without attaching faces to names, face matching can be used to track a person's movements in real-time, for example, around a store or around a city, often called "face tracking."

All forms of face matching raise serious digital rights concerns, including face identification, verification, tracking, and clustering. Lawmakers must address them all. Any face recognition system used for "tracking", "clustering", or "verification" of an unknown person can easily be used for "identification" as well. The underlying technology is often exactly the same. For example, all it takes is linking a set of "known" faceprints to a cluster of "unknown" faceprints to turn clustering into identification.
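
That claim is easy to see in code. The hypothetical sketch below first groups unknown faceprints by similarity (a crude greedy clustering, purely for illustration) and then attaches names to the clusters by comparing them against a set of "known" faceprints; every threshold and data structure here is an assumption for the example, not a description of any real product.

    import numpy as np

    def distance(a, b):
        return np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float))

    def cluster(faceprints, threshold=0.6):
        """Greedy clustering: add each faceprint to the first group it is
        close to, or start a new group. Real systems use better algorithms,
        but the idea is the same."""
        groups = []
        for fp in faceprints:
            for group in groups:
                if distance(fp, group[0]) < threshold:
                    group.append(fp)
                    break
            else:
                groups.append([fp])
        return groups

    def label_clusters(groups, known_faceprints, threshold=0.6):
        """Clustering becomes identification the moment a group is linked
        to a known faceprint."""
        labels = []
        for group in groups:
            name = next((n for n, fp in known_faceprints.items()
                         if distance(group[0], fp) < threshold), None)
            labels.append(name)
        return labels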

Even if face identification technology is never used, face clustering and tracking technologies can threaten privacy, free speech, and equity. For example, police might use face-tracking technology to follow an unidentified protester from a rally to their home or car, and then identify them with an address or license plate database. Or police might use face clustering technology to create a multi-photo array of a particular unidentified protester, and manually identify the protester by comparing that array to a mugshot database, where such manual identification would have been impossible based on a single photo of the protester.


Accuracy, Error, and Bias

In 2019, Nijeer Parks was wrongfully arrested after being misidentified by a facial recognition system. Despite being 30 miles away from the scene of the alleged crime, Parks spent 10 days in jail before police admitted their mistake.

Nijeer Parks is at least the third person to be falsely arrested due to faulty face recognition tech. It's no coincidence that all three people were Black men. Facial recognition is never perfect, but it is alarmingly more error-prone when applied to anyone who is not a white and cisgender man. In a pioneering study from 2018, Joy Buolamwini and Dr. Timnit Gebru showed that face identification systems misidentified women of color at more than 40 times the rate of white men. More recently, NIST testing of various state-of-the-art face recognition systems confirmed a broad, dramatic trend of disparate "false positive" rates across demographics, with higher error rates for faces that were not white and male.


Furthermore, face identification systems that perform well on laboratory benchmarks--for example, identifying well-lit headshots--are usually much less accurate in the real world. Given a more realistic task, like identifying people walking through an airport boarding gate, the same technology performs far worse.

For many reasons, widespread deployment of facial identification--even if it were accurate and unbiased--is incompatible with a free society. But the technology today is far from accurate, and it is deeply biased in ways that magnify the existing systemic racism in our criminal justice system.

We expect that researchers will find the same kinds of unacceptable errors and bias in face tracking and clustering as have already been found in face identification. That is one more reason why privacy laws must address all forms of face recognition.


Another Form of Face Recognition: Face Analysis

Face recognition has many applications beyond matching one faceprint to another. It is also used to try to guess a person's demographic traits, emotional state, and more, based on their facial features. A burgeoning industry purports to use what is often called "face analysis" or "face inference" to try to extract this kind of auxiliary information from live or recorded images of faces. Face analysis may be used in combination with other technologies, like eye tracking, to examine your facial reaction to what you are looking at.


Demographic Analysis

Some vendors claim they can use face recognition technologies to assign demographic attributes to their targets, including gender, race, ethnicity, sexual orientation, and age.

It's doubtful that such demographic face analysis can ever really "work." It relies on the assumption that differences in the structure of a face are perfect reflections of demographic traits, when in many cases that is not true. These demographics are often social constructs and many people do not fit neatly under societal labels.

When it does "work", at least according to whoever is deploying it, demographic face inference technology can be extremely dangerous to marginalized groups. For example, these systems allow marketers to discriminate against people on the basis of gender or race. Stores might attempt to use face analysis to steer unidentified patrons towards different goods and discounts based on their gender or emotional state--a misguided attempt whether it succeeds or fails. At the horrific extreme, automatic demographic inference can help automate genocide.

These technologies can also harm people by not working. For example, "gender recognition" will misidentify anyone who does not present traditional gender features, and can harm transgender, nonbinary, gender non-conforming, and intersex people. That's why some activists are campaigning to ban automated recognition of gender and sexual orientation.


Emotion Analysis

Face analysis also purportedly can identify a person's emotions or "affect," both in real-time and on historical images. Several companies sell services they claim can determine how a person is feeling based on their face.


This technology is pseudoscience: at best, it might learn to identify some cultural norms. But people often express emotions differently, based on culture, temperament, and neurodivergence. Trying to uncover a universal mapping of "facial expression" to "emotion" is a snipe hunt. The research institute AI Now cited this technology's lack of scientific basis and potential for discriminatory abuse in a scathing 2019 report, and called for regulators to ban its use for important decisions about human lives.

Despite the lack of scientific backing, emotion recognition is popular among many advertisers and market researchers. Having reached the limits of consumer surveys, these companies now seek to assess how people react to media and advertisements by video observation, with or without their consent.

Even more alarmingly, these systems can be deployed to police "pre-crime"--using computer-aided guesses about mental state to scrutinize people who have done nothing wrong. For example, the U.S. Department of Homeland Security spent millions on a project called "FAST", which would use facial inference, among other inputs, to detect "mal-intent" and "deception" in people at airports and borders. Face analysis can also be incorporated into so-called "aggression detectors," which supposedly can predict when someone is about to become violent. These systems are extremely biased and nowhere near reliable, yet likely will be used to justify excessive force or wrongful detention against whoever the system determines is "angry" or "deceptive." The use of algorithms to identify people for detention or disciplinary scrutiny is extremely fraught, and will do far more to reinforce existing bias than to make anyone safer.

Some researchers have even gone as far as to suggest that "criminality" can be predicted from one's face. This is plainly not true. Such technology would unacceptably exacerbate the larger problems with predictive policing.


Take Action

Mitigating the risks raised by the many forms of face recognition requires each of us to be empowered as the ultimate decision-maker in how our biometric data is collected, used, or shared. To protect yourself and your community from unconsented collection of biometric data by corporations, contact your representatives and tell them to join Senators Jeff Merkley and Bernie Sanders in advocating for a national biometric information privacy act.

Government use of face recognition technology is an even greater menace to our essential freedoms. This is why government agencies must end the practice, full stop. More than a dozen communities from San Francisco to Boston have already taken action by banning their local agencies from utilizing the technology. To find out how you can take steps today to end government use of face recognition technology in your area, visit EFF's About Face resource page.

For a proposed taxonomy of the various kinds of face recognition discussed in this post, check out this list of commonly used terms.


Face Recognition Technology: Commonly Used Terms
By Adam Schwartz, Nathan Sheard, and Bennett Cyphers



As face recognition technology evolves at a dizzying speed, new uses and terminologies seem to develop daily. On this page, we attempt to define and disambiguate some of the most commonly used terms.

For more information on government use of face recognition and how to end it in your community, visit EFF's About Face resource page.

Face detection: Determines whether an image includes a human face. Some government agencies use face detection to aid in obscuring identifiable faces before releasing video footage in response to requests for public records. As a result, many bans on government use of face recognition technology specifically exclude face detection for this purpose, provided that no information about the faces is collected or stored. Generally, this use does not raise significant privacy concerns.

Face recognition: Any collection and processing of faceprints, including both face matching and face analysis (two terms defined below). Face recognition raises significant digital rights concerns.

Faceprinting: A fundamental step in the process of face recognition, faceprinting is the automated analysis and translation of visible characteristics of a face into a unique mathematical representation of that face. Both collection and storage of this information raise privacy and safety concerns.

Face matching: Any comparison of two or more faceprints. This includes face identification, face verification, face clustering, and face tracking (four terms defined below, with a brief code sketch after the list).

  • Face identification: Compares (i) a single faceprint of an unknown person to (ii) a set of faceprints of known people. The goal is to identify the unknown person. Face identification may yield multiple results, sometimes with a "confidence" indicator showing how certain the system is that a returned image matches the unknown image.

  • Face verification: Compares (i) a single faceprint of a person seeking verification of their authorization to (ii) one or more faceprints of authorized individuals. The verified person might or might not be identified as a specific person; a system may verify that two faceprints belong to the same person without knowing who that person is. Face verification may be used to unlock a phone or to authorize a purchase.

  • Face clustering: Compares all the faceprints in a collection of images to one another, in order to group the images containing a particular person or group of people. The clustered people might or might not then be identified as known individuals. For example, each of the people in a library of digital photos (whether a personal album or a police array of everyone at a protest) could have their various pictures automatically clustered into a discrete set.

  • Face tracking: Uses faceprints to follow the movements of a particular person through a physical space covered by one or more surveillance cameras, such as the interior of a store or the exterior sidewalks in a city's downtown. The tracked person might or might not be identified. The tracking might be real-time or based on historical footage.
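
To underline how closely these four operations are related, here is a hypothetical set of Python functions, each a thin wrapper around the same faceprint comparison; the signatures, threshold, and data structures are illustrative assumptions, not any real system's interface.

    def match(a, b, threshold=0.6):
        """Core primitive: are two faceprints (sequences of numbers) similar enough?"""
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5 < threshold

    def identify(unknown, known_people):
        """One unknown faceprint vs. many known ones; returns possible names."""
        return [name for name, fp in known_people.items() if match(unknown, fp)]

    def verify(claimant_faceprint, authorized_faceprints):
        """One person's faceprint vs. the set of authorized faceprints."""
        return any(match(claimant_faceprint, fp) for fp in authorized_faceprints)

    def cluster(faceprints, threshold=0.6):
        """Many unknown faceprints vs. each other; groups photos of one person."""
        groups = []
        for fp in faceprints:
            for group in groups:
                if match(fp, group[0], threshold):
                    group.append(fp)
                    break
            else:
                groups.append([fp])
        return groups

    def track(timestamped_faceprints, target_faceprint):
        """One target faceprint vs. faceprints seen over time; returns sightings."""
        return [t for t, fp in timestamped_faceprints if match(fp, target_faceprint)]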

Face analysis, also known as face inference: Any processing of a faceprint, without comparison to another individual's faceprint, to learn something about the person from whom the faceprint was extracted. Face analysis by itself will not identify or verify a person. Some face analysis purports to draw inferences about a person's demographics (such as race or gender), emotional or mental state (such as anger), behavioral characteristics, and even criminality.

For more information about the various kinds of face recognition, check out this more detailed post.


Resisting the Menace of Face Recognition
By Adam Schwartz



Face recognition technology is a special menace to privacy, racial justice, free expression, and information security. Our faces are unique identifiers, and most of us expose them everywhere we go. And unlike our passwords and identification numbers, we can't get a new face. So, governments and businesses, often working in partnership, are increasingly using our faces to track our whereabouts, activities, and associations.

Fortunately, people around the world are fighting back. A growing number of communities have banned government use of face recognition. As to business use, many communities are looking to a watershed Illinois statute, which requires businesses to get opt-in consent before extracting a person's faceprint. EFF is proud to support laws like these.


Face Recognition Harms

Let's begin with the ways that face recognition harms us. Then we'll turn to solutions.


Privacy

Face recognition violates our human right to privacy. Surveillance camera networks have flooded our public spaces. Face recognition technologies are more powerful by the day. Taken together, these systems can quickly, cheaply, and easily ascertain where we've been, who we've been with, and what we've been doing. All based on a unique marker that we cannot change or hide: our own faces.

In the words of a federal appeals court ruling in 2019, in a case brought against Facebook for taking faceprints from its users without their consent:

    Once a face template of an individual is created, Facebook can use it to identify that individual in any of the other hundreds of millions of photos uploaded to Facebook each day, as well as determine when the individual was present at a specific location. Facebook can also identify the individual's Facebook friends or acquaintances who are present in the photo. ... [I]t seems likely that a face-mapped individual could be identified from a surveillance photo taken on the streets or in an office building.

Government use of face recognition also raises Fourth Amendment concerns. In recent years, the U.S. Supreme Court has repeatedly placed limits on invasive government uses of cutting-edge surveillance technologies. This includes police use of GPS devices and cell site location information to track our movements. Face surveillance can likewise track our movements.


Racial Justice

Face recognition also has an unfair disparate impact against people of color.

Its use has led to the wrongful arrests of at least three Black men. Their names are Michael Oliver, Nijeer Parks, and Robert Williams. Every arrest of a Black person carries the risk of excessive or even deadly police force. So, face recognition is a threat to Black lives. This technology also caused a public skating rink to erroneously expel a Black patron. Her name is Lamya Robinson. So, face recognition is also a threat to equal opportunity in places of public accommodation.

These cases of "mistaken identity" are not anomalies. Many studies have shown that face recognition technology is more likely to misidentify people of color than white people. A leader in this research is Joy Buolamwini.

Even if face recognition technology were always accurate, or at least equally inaccurate across racial groups, it would still have an unfair racially disparate impact. Surveillance cameras are over-deployed in minority neighborhoods, so people of color will be more likely than others to be subjected to faceprinting. Also, history shows that police often aim surveillance technologies at racial justice advocates.

Face recognition is just the latest chapter of what Alvaro Bedoya calls "the color of surveillance." This technology harkens back to "lantern laws," which required people of color to carry candle lanterns while walking the streets after dark, so police could better see their faces and monitor their movements.


Free Expression

In addition, face recognition chills and deters our freedom of expression.

The First Amendment protects the right to confidentiality when we engage in many kinds of expressive activity. These include anonymous speech, private conversations, confidential receipt of unpopular ideas, gathering news from undisclosed sources, and confidential membership in expressive associations. All of these expressive activities depend on freedom from surveillance because many participants fear retaliation from police, employers, and neighbors. Research confirms that surveillance deters speech.

Yet, in the past two years, law enforcement agencies across the country have used face recognition to identify protesters for Black lives. These include the U.S. Park Police, the U.S. Postal Inspection Service, and local police in Boca Raton, Broward County, Fort Lauderdale, Miami, New York City, and Pittsburgh. This shows, again, the color of surveillance.

Police might also use face recognition to identify the whistleblower who walked into a newspaper office, or the reader who walked into a dissident bookstore, or the employee who walked into a union headquarters, or the distributor of an anonymous leaflet. The proliferation of face surveillance can deter all of these First Amendment-protected activities.


Information Security

Finally, face recognition threatens our information security.

Data thieves regularly steal vast troves of personal data. These include faceprints. For example, the faceprints of 184,000 travelers were stolen from a vendor of U.S. Customs and Border Protection.

Criminals and foreign governments can use stolen faceprints to break into secured accounts that the owner's face can unlock. Indeed, a team of security researchers did this with 3D models based on Facebook photos.


Face Recognition Types

To sum up: face recognition is a threat to privacy, racial justice, free expression, and information security. However, before moving on to solutions, let's pause to describe the various types of face recognition.

Two are most familiar. "Face identification" compares the faceprint of an unknown person to a set of faceprints of known people. For example, police may attempt to identify an unknown suspect by comparing their faceprint to those in a mugshot database.

"Face verification" compares the faceprint of a person seeking access, to the faceprints of people authorized for such access. This can be a minimally concerning use of the technology. For example, many people use face verification to unlock their phones.

There's much more to face recognition. For example, face clustering, tracking, and analysis do not necessarily involve face identification or verification.

"Face clustering" compares all faceprints in a collection of images to one another, to group the images containing a particular person. For example, police might create a multi-photo array of an unidentified protester, then manually identify them with a mugshot book.

"Face tracking" follows the movements of a particular person through a physical space covered by surveillance cameras. For example, police might follow an unidentified protester from a rally to their home or car, then identify them with an address or license plate database.

"Face analysis" purports to learn something about a person, like their race or emotional state, by scrutinizing their face. Such analysis will often be wrong, as the meaning of a facial characteristic is often a social construct. For example, it will misgender people who are transgender or nonbinary. If it "works," it may be used for racial profiling. For example, a Chinese company claims it works as a "Uighur alarm." Finally, automated screening to determine whether a person is supposedly angry or deceptive can cause police to escalate their use of force, or expand the duration and scope of a detention.

Legislators must address all forms of face recognition: not just identification and verification, but also clustering, tracking, and analysis.


Government Use of Face Recognition

EFF supports a ban on government use of face recognition. The technology is so destructive that government must not use it at all.

EFF has supported successful advocacy campaigns across the country. Many local communities have banned government use of face recognition, from Boston to San Francisco. The State of California placed a three-year moratorium on police use of face recognition with body cameras. Some businesses have stopped selling face recognition to police.

We also support a bill to end federal use of face recognition. If you want to help stop government use of face recognition in your community, check out EFF's "About Face" toolkit.


Corporate Use of Face Recognition

The Problem

Corporate use of face recognition also harms privacy, racial justice, free expression, and information security.

Part of the problem is at brick-and-mortar stores. Some use face identification to detect potential shoplifters. This often relies on error-prone, racially biased criminal justice data. Other stores use it to identify banned patrons. But this can misidentify innocent patrons, especially if they are people of color, as happened to Lamya Robinson at a roller rink. Still other stores use face identification, tracking, and analysis to serve customers targeted ads or track their behavior over time. This is part of the larger problem of surveillance-based advertising, which harms all of our privacy.

There are many other kinds of threatening corporate uses of face recognition. For example, some companies use it to scrutinize their employees. This is just one of many high-tech ways that bosses spy on workers. Other companies, like Clearview AI, use face recognition to help police identify people of interest, including BLM protesters. Such corporate-government surveillance partnerships are a growing threat.


The Solution

Of all the laws now on the books, one has done the most to protect us from corporate use of face recognition: the Illinois Biometric Information Privacy Act, or BIPA.

At its core, BIPA does three things:

  1. It bans businesses from collecting or disclosing a person's faceprint without their opt-in consent.

  2. It requires businesses to delete the faceprints after a fixed time.

  3. If a business violates a person's BIPA rights by unlawfully collecting, disclosing, or retaining their faceprint, that person has a "private right of action" to sue that business.

EFF has long worked to enact more BIPA-type laws, including in Congress and the states. We regularly advocate in Illinois to protect BIPA from legislative backsliding. We have also filed amicus briefs in a federal appellate court and the Illinois Supreme Court to ensure that everyone who has suffered a violation of their BIPA rights can have their day in court.

BIPA prevents one of the worst corporate uses of face recognition: dragnet faceprinting of the public at large. Some companies do this to all people entering a store, or all people appearing in photos on social media. This practice violates BIPA because some of these people have not previously consented to faceprinting.

People have filed many BIPA lawsuits against companies that took their faceprints without their consent. Facebook settled one case, arising from their "tag suggestions" feature, for $650 million.


First Amendment Challenges

Other BIPA lawsuits have been filed against Clearview AI. This is the company that extracted faceprints from ten billion photographs, and uses these faceprints to help police identify suspects. The company does not seek consent for its faceprinting. So Clearview now faces a BIPA lawsuit in Illinois state court, brought by the ACLU, and several similar suits in federal court.

In both venues, Clearview asserts a First Amendment defense. EFF disagrees and filed amicus briefs saying so. Our reasoning proceeds in three steps.

First, Clearview's faceprinting enjoys at least some First Amendment protection. It collects information about a face's measurements, and creates information in the form of a unique mathematical representation. The First Amendment protects the collection and creation of information because these often are necessary predicates to free expression. For example, the U.S. Supreme Court has ruled that the First Amendment protects reading books, gathering news, creating video games, and even purchasing ink by the barrel. Likewise, appellate courts protect the right to record on-duty police.

First Amendment protection of faceprinting is not diminished by its use of computer code, because code is speech. To paraphrase one court: just as musicians can communicate among themselves with a musical score, computer programmers can communicate among themselves with computer code.

Second, Clearview's faceprinting does not enjoy the strongest forms of First Amendment protection, such as "strict scrutiny." Rather, it enjoys just "intermediate scrutiny." This is because it does not address a matter of public concern. The Supreme Court has emphasized this factor in many contexts, including wiretapping, defamation, and emotional distress.

Likewise, lower courts have held that common law claims of information privacy--namely, intrusion on seclusion and publication of private facts--do not violate the First Amendment if the information at issue was not a matter of public concern.

Intermediate review also applies to Clearview's faceprinting because its interests are solely economic. The Supreme Court has long held that "commercial speech," meaning "expression related solely to the economic interests of the speaker and its audience," receives "lesser protection." Thus, when laws that protect consumer data privacy face First Amendment challenge, lower courts apply intermediate judicial review under the commercial speech doctrine.

To pass this test, a law must advance a "substantial interest," and there must be a "close fit" between this interest and what the law requires.

Third, the application of BIPA to Clearview's faceprinting passes this intermediate test. As discussed earlier, the State of Illinois has strong interests in preventing the harms caused by faceprinting to privacy, racial justice, free expression, and information security. Also, there is a close fit from these interests to the safeguard that Illinois requires: opt-in consent to collect a faceprint. In the words of the Supreme Court, data privacy requires "the individual's control of information concerning [their] person."

Some business groups have contested the close fit between BIPA's means and ends by suggesting Illinois could achieve its goals, with less burden on business, by requiring just an opportunity for people to opt out. But defaults matter. Opt-out is not an adequate substitute for opt-in. Many people won't know a business collected their faceprint, let alone how to opt out. Other people will be deterred by the confusing and time-consuming opt-out process. This problem is worse than it needs to be because many companies deploy "dark patterns," meaning user experience designs that manipulate users into giving their so-called "agreement" to data processing.

Thus, numerous federal appellate and trial courts have upheld consumer data privacy laws that are similar to BIPA against First Amendment challenge. Just this past August, an Illinois judge rejected Clearview's First Amendment defense.


Next Steps

In the hands of government and business alike, face recognition technology is a growing menace to our digital rights. But the future is unwritten. EFF is proud of its contributions to the movement to resist abuse of these technologies. Please join us in demanding a ban on government use of face recognition, and laws like Illinois' BIPA to limit private use. Together, we can end this threat.


