In the wake of the Cambridge Analytica scandal and Facebook CEO Mark Zuckerberg’s recent testimony before Congress, there is growing concern over data privacy and how companies like Facebook handle user information. The places this information ends up, and the ways it is used—largely without our direct knowledge—seem limitless.
These recent events demonstrate the power that Facebook, and other social media platforms, can wield over all aspects of our culture and politics. Notably, during his nearly 10-hour congressional testimony, Zuckerberg acknowledged that data firm Cambridge Analytica improperly obtained Facebook users’ private data and used the information to help political clients, including Donald Trump’s 2016 presidential campaign. This data breach affected up to 87 million Facebook users.
All of this has created a significant public backlash against Facebook, among other social media and “big data” companies. People are starting to demand more robust restrictions on the corporate use of their private data.
On this front, the Illinois legislature was actually ahead of its time.
In 2008, the state legislature enacted the Illinois Biometric Information Privacy Act (BIPA), aimed at safeguarding Illinois residents against biometric identity theft. BIPA defines “biometric identifier” as “a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry.”
Illinois lawmakers tackled the issue of biometric data because they recognized that biometrics, unlike other personal data such as alphanumeric passwords, credit card numbers, or social security numbers, are biologically unique to the individual; once compromised, they cannot be changed, leaving the individual without recourse and at heightened risk of identity theft. The Act also recognizes that biometric-facilitated transactions are likely to become more common, and that individuals may “withdraw from” or be “deterred from” partaking in such transactions if biometrics are poorly regulated, which in turn could harm the state’s high-tech industry.
To address these concerns, BIPA states that no private entity can collect, capture, purchase, obtain, or store a person’s “biometric identifier” without obtaining prior written consent. Further, any private entity that possesses biometric identifiers or information—such as fingerprints or faceprints—cannot sell, lease, trade, or otherwise profit from such identifiers or information, or disclose or disseminate them to third parties, except under specific circumstances. BIPA also sets standards for how biometric identifiers and information must be stored, transmitted, and protected, and it mandates that companies possessing biometric identifiers have written, publicly available policies for their retention and destruction.
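In software terms, BIPA’s consent requirement amounts to a gate in front of any biometric data store. The Python sketch below is purely hypothetical—none of these class or function names come from the statute or any real system—but it models the core rule: collection and storage must be preceded by a written release.

```python
from dataclasses import dataclass, field

@dataclass
class BiometricRecord:
    subject_id: str
    identifier_type: str  # e.g. "fingerprint" or "face_geometry"
    data: bytes

@dataclass
class ConsentRegistry:
    """Tracks which subjects have a written release on file."""
    consents: set = field(default_factory=set)

    def record_written_release(self, subject_id: str) -> None:
        self.consents.add(subject_id)

    def has_consent(self, subject_id: str) -> bool:
        return subject_id in self.consents

def store_biometric(record: BiometricRecord,
                    registry: ConsentRegistry,
                    vault: dict) -> None:
    # BIPA-style gate: refuse to collect or store a biometric
    # identifier unless a prior written release is on file.
    if not registry.has_consent(record.subject_id):
        raise PermissionError(
            f"No written release on file for {record.subject_id}"
        )
    vault[record.subject_id] = record
```

A real compliance system would also need the retention, destruction, and disclosure controls the statute describes; this sketch covers only the consent gate.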
Not all biometric data, however, falls under these prohibitions and requirements. BIPA specifically excludes certain identifiers, such as writing samples and signatures, physical descriptions (e.g., height, weight, eye color), and information or images collected in a health care setting for the purpose of diagnosis or treatment.
BIPA is Starting to Have a Real Impact
With advances in technology, the collection and use of biometric information are on the rise. Devices with fingerprint and faceprint capabilities are now a part of our everyday lives (take the iPhone’s Touch ID and Face ID, for example). Thus, it comes as no surprise that, while BIPA has been on the books since October 2008, the last several years have seen an uptick in lawsuits brought under the act.
Class action lawsuits brought under BIPA have been particularly popular. A “class action” lawsuit is one in which a group of people with the same or similar injuries caused by the same product or action sue the defendant as a group. In the last several years, plaintiffs have filed class action claims under BIPA against Facebook, Snapchat, Shutterfly, and Google related to their use of facial recognition software in conjunction with users’ photographs, and against video game manufacturer Take-Two Interactive Software related to the creation of personal avatars. Plaintiffs have also filed several lawsuits involving fingerprinting. These suits have met with varying degrees of success: some were dismissed early on, some have settled, and others remain ongoing.
On April 16, 2018, for example, a federal judge in the Northern District of California certified a class of Facebook users from Illinois in a case alleging that Facebook’s use of facial recognition software violates BIPA. Facebook launched its facial recognition—or “faceprinting”—technology in 2011 as the Tag Suggestions functionality.
The court defined the class as all “Facebook users located in Illinois for whom Facebook created and stored a face template after June 7, 2011.” This class action lawsuit originated as three separate cases brought by individuals in Illinois federal court; by agreement of the parties, the cases were transferred to California and consolidated into a single action. BIPA is unique in that it gives private parties—including individuals—the right to sue. Negligent violations of BIPA carry liquidated damages of $1,000 per violation, while reckless or intentional violations carry $5,000 per violation (or actual damages, whichever is greater).
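Those per-violation figures scale quickly across a certified class. The arithmetic below is a deliberately simplified, hypothetical illustration—the class size is invented, and it assumes one violation per class member, which real litigation would not:

```python
# Liquidated damages per violation under BIPA, in USD.
NEGLIGENT_PER_VIOLATION = 1_000
RECKLESS_PER_VIOLATION = 5_000

def potential_exposure(class_size: int, per_violation_damages: int) -> int:
    """Simplest possible model: one violation per class member."""
    return class_size * per_violation_damages

# A hypothetical class of 1 million members under the reckless standard:
print(potential_exposure(1_000_000, RECKLESS_PER_VIOLATION))  # prints 5000000000
```

Even under the lower negligence figure, the exposure for a class of this hypothetical size would be $1 billion, which helps explain why BIPA class actions have attracted so much attention.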
What exactly does Facebook collect from its users, what does it do with that information, and why does it allegedly violate BIPA? According to its Terms of Service, Facebook “must collect and use your personal data.” Facebook sets out its practices for data collection and use in its accompanying Data Policy, which users “must agree to in order to use [its] Products.” The Data Policy opens with the following notice: “To provide the Facebook Products, we must process information about you.” All of these “musts” make clear that Facebook is in the business of harvesting and using—for financial gain, of course—your personal data.
Facebook states that it collects “the content and other information you provide when you use our Services, including when you sign up for an account, create or share, and message or communicate with others.” Facebook keeps the information it collects “for as long as it is necessary to provide products and services to you and others,” and acknowledges that it shares users’ data with third parties, unless the user prohibits sharing via the platform’s privacy controls. One can imagine that the sum of these activities amounts to a staggering amount of data collected by the tech giant. As of March 2018, Facebook claimed an average of 1.45 billion daily active users, most of whom likely have not placed any restriction on what data Facebook can collect, or how it can be used.
While not explicitly called out in the Data Policy, it is now well known that Facebook also collects and uses data embodying the facial characteristics of those users appearing in photos uploaded to the platform. Facebook uses this information “to provide shortcuts and suggestions to you,” such as “suggest[ing] that your friend tag you in a picture by comparing your friend’s pictures to information we’ve put together from your profile pictures and the other photos in which you’ve been tagged.” Users may, however, control whether Facebook suggests that another user tag them in a photo.
How does Facebook’s facial recognition software work, exactly? According to its information page, in order to determine if you appear in a photo or video—whether uploaded by you or another user—Facebook “compares it with an analysis of photos and videos such as your profile picture and photos and videos that you’re tagged in.” More specifically, the “Tag Suggestions” software utilizes a four-step facial recognition process built around two constructs, “face signatures” and “face templates”:
A “face signature” is a “string of numbers that represents a particular image of a face.” A “face template” is “a string of numbers that represents a boundary” between the face signatures of a given Facebook user and the face signatures of others, and is calculated from that user’s photographs; the template is based on the geometric relationships of the user’s facial features, such as the distances between the eyes, nose, and ears. If a computed face signature falls within the boundary described by a user’s face template, Facebook suggests tagging the user. According to court documents, Facebook represents that it stores only face templates, not face signatures.
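Mechanically, the signature-versus-template comparison described in the court documents resembles a per-user binary classifier over numeric feature vectors. The Python sketch below is a toy under stated assumptions, not Facebook’s actual implementation: it models a face template as a linear boundary separating one user’s signatures from everyone else’s, and suggests a tag when a new signature falls on the user’s side of that boundary.

```python
import numpy as np

def train_face_template(user_signatures: np.ndarray,
                        other_signatures: np.ndarray):
    """Fit a crude linear boundary between one user's face signatures
    and everyone else's.

    Each signature is a numeric feature vector (in reality derived from
    facial geometry). Here the "template" is simply the hyperplane
    perpendicular to the line between the two class centroids, passing
    through their midpoint.
    """
    mu_user = user_signatures.mean(axis=0)
    mu_other = other_signatures.mean(axis=0)
    w = mu_user - mu_other                 # normal vector of the boundary
    b = w @ (mu_user + mu_other) / 2.0     # boundary passes through midpoint
    return w, b

def suggest_tag(signature: np.ndarray, template) -> bool:
    # Suggest tagging the user only if the new signature lands on
    # the user's side of the stored boundary.
    w, b = template
    return bool(w @ signature > b)
```

Note that, consistent with what Facebook represents in the court documents, only the template (the boundary) needs to be retained in this scheme; the individual signatures used to compute it can be discarded after training.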
The plaintiffs allege in their complaint that Facebook violated BIPA because, among other things, it neither informed users that their biometric identifiers (face geometry) were being generated, collected, or stored, nor obtained a written release from users allowing it to collect, capture, or obtain those identifiers. The case is currently set to go to trial on June 9 of this year. If Facebook is found to have violated BIPA through its use of Tag Suggestions, not only could it be liable for damages to the class, but the outcome could change how the platform interacts with users in Illinois. For example, Facebook could elect to obtain the requisite written consent from all Illinois users, or it could disable the Tag Suggestions functionality for users in the state.
It will be interesting to see what happens in this case in the months to come. Facebook clearly values its facial recognition technology and the Tag Suggestions software, and it has obtained several patents directed to this technology, including U.S. Patent No. 5,164,992, U.S. Patent No. 6,292,575, U.S. Patent No. 6,681,032, U.S. Patent No. 8,666,198, and U.S. Patent No. 9,143,573. There’s a lot of money at stake, and Facebook has a lot to lose.
Erin Conway is a partner at the firm Amin Talati Upadhye LLP in Chicago and specializes in Intellectual Property law.