Sweden clearly gets consent. When will the U.S.?
In all of the headlines I’ve seen about Sweden’s first GDPR fine, the focus is on facial recognition:
- “Facial recognition in school renders Sweden’s first GDPR fine,” says the EDPB
- “Facial recognition in schools leads to Sweden’s first GDPR fine,” says The Next Web
- “Facial recognition: School ID checks lead to GDPR fine,” says the BBC
You get the idea. And, yes, facial recognition is involved in the case. For sure. But everyone’s missing the point. It’s not that facial recognition was used. It’s that the consent that the school acquired for using the facial recognition software and cameras was ruled invalid due to the power structure involved.
Yes, this is a case about consent, first and foremost, and about what “consent” really means. It’s vital that we in the U.S., especially, pay attention to the ways that Europeans are coming to define consent and begin to extrapolate those lessons into the U.S. regulation of personal data collection and use.
It would be easy to look at the headlines and say, “oh, you can’t use facial recognition in schools,” but that’s not what the Swedish DPA has ruled at all. Rather, it said that relying on consent as the lawful basis to process the sensitive biometric data of students (really, it could have been considered “doubly sensitive,” but Sweden has set its age of consent at 13, so these kids are likely not “children,” as defined by the GDPR) is not valid “given the clear imbalance between the data subject and the controller,” as the EDPB reported.
This must have come as some surprise to the school. They likely were very transparent with the whole situation, alerting parents, getting them to sign paperwork, and having all of their ducks in a row. This is the contractual way we’ve done privacy, historically, and there are clearly those in the U.S., especially, who would find it very strange, indeed, that having parents sign off on the use of the technology would not make it valid.
But this is one of the areas where the GDPR is truly visionary and revelatory, as it realizes those parents likely didn’t feel like they had a choice in the matter (nor did the students, I’m sure). What would have happened if they said “no”? Would the school not be able to do this thing they wanted to do? Would they be accused of blocking progress? Would they be accused of wanting their kid to have “special status”? Would their kid be identified as the one with “the weirdo parents who want their kid to be able to skip school”? There are pretty obvious social repercussions for not granting the consent.
Thus, this situation runs afoul of Recital 43, which elaborates on “freely given” consent: “In order to ensure that consent is freely given, consent should not provide a valid legal ground for the processing of personal data in a specific case where there is a clear imbalance between the data subject and the controller, in particular where the controller is a public authority and it is therefore unlikely that consent was freely given in all the circumstances of that specific situation.”
I think a school is pretty clearly a “public authority,” so it’s unlikely anyone did even a little GDPR research before pulling the trigger on this process; and, regardless, the relationship between a school and its students is a “clear imbalance” on its face.
Well, then, you might ask, how on earth could you ever get any data collection done in a school system? I think you could probably get through it via the “legitimate interest” condition, and it seems like that’s where the Swedish DPA is pointing. The EDPB notes that the fine was given partly because the school “failed to do an adequate impact assessment including seeking prior consultation with the Swedish DPA.”
Here’s how I could see the situation going:
1. The school sees that taking attendance wastes valuable class time and that teacher resources are scarce.
2. They research and discover a facial recognition program that does nothing but create a binary list for each day, present or not present, retaining no other data, thereby automating attendance.
3. The software retains no actual image of any student. Instead, each student’s image is converted into some kind of code when they enter the classroom each day. The data is pseudonymized: the file associating students with their codes is kept separate from the daily binary lists.
4. A DPIA is conducted showing that very little personal data is collected and that the risk of harm to the students is very low, and it is shared with the DPA.
5. The school considers the time savings valuable, and the DPA agrees.
6. They test the system (as this school tried to do) to make sure the cost and time savings are realized, that it works accurately over a given period of time, and that no ancillary or unexpected personal data is gathered.
7. Finally, given the successful test, the school is able to successfully argue that it has a legitimate interest in the processing, with a plan to re-check the system regularly and make sure it performs as implemented.
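To make the pseudonymization idea concrete, here is a minimal sketch of what such a data-minimizing attendance store could look like. Everything here is my own illustration, not the school’s actual system: the keyed-hash approach, the function names, and the assumption that the recognition software yields a stable per-student identifier are all assumptions.

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: the secret key would be held separately
# from the attendance records (e.g., by a different custodian).
PSEUDONYM_KEY = secrets.token_bytes(32)

def pseudonym(student_embedding: bytes) -> str:
    """Turn a face-recognition identifier into an opaque code.
    The raw image/embedding is discarded right after this call."""
    return hmac.new(PSEUDONYM_KEY, student_embedding, hashlib.sha256).hexdigest()

# Kept in one file: code -> student name (the re-identification table).
roster: dict[str, str] = {}

# Kept separately: per-day attendance, keyed by code only.
attendance: dict[str, set[str]] = {}

def enroll(name: str, embedding: bytes) -> None:
    """One-time setup, mapping a student's code to their name."""
    roster[pseudonym(embedding)] = name

def mark_present(day: str, embedding: bytes) -> None:
    """Record only 'this code was seen on this day' - no image retained."""
    attendance.setdefault(day, set()).add(pseudonym(embedding))

def present_list(day: str) -> dict[str, bool]:
    """The binary present/not-present list for a day."""
    seen = attendance.get(day, set())
    return {name: (code in seen) for code, name in roster.items()}
```

The point of the design is that the daily records alone reveal nothing: without both the roster file and the key, a code cannot be tied back to a student.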
No consent needed. Everything’s legal. Ta-da.
Thus, facial recognition is deemed fine for use, and the school can roll it out school-wide. Even better, all of the other schools could take a look at that DPIA, conduct a similar exercise to make sure it applies to their own situation, and then roll it out in their own schools, with confidence.
I know, I know: That’s way more work than just getting parents to sign a form!
Yes, but it’s also much more ethical and rigorous and designed to reach an outcome where the use of personal data is actually for a valid and important purpose, rather than just because it seems cool or new or fun or easy.
This is where we have to go in the United States. Not only do schools have a power imbalance over their students, but these huge data-hogging platforms have a power imbalance over the American people. Can you be in the American white collar workplace and not use LinkedIn? Can you be an artist or in a band and not be on Facebook or Spotify? You get the idea.
These platforms shouldn’t be using check-box consent forms to collect personal data, they should be forced to demonstrate that their use of the data is necessary for the provision of the service people think they’re getting. People can’t “consent” to data collection they barely understand on platforms they are essentially forced to use.
We have to find a better way. The GDPR offers a path forward.