Illicit Child Pornography Images Tracked Without Other Privacy Rights Invaded
The search giant Google made headlines again this week with the announcement that an individual had allegedly used the company’s Gmail product in a child pornography exchange. The most notable part of this story, however, was that the suspect was caught through Google’s own initiative: the company matched the purported child pornography images against known illicit images previously documented by the National Center for Missing and Exploited Children (“NCMEC”) as violative material.
But is it an invasion of privacy for Google to scan its users’ Gmail message content looking for illegal (or even questionable) material?
Child Pornography Crimes as an Exception to Privacy
The answer to this question, at least for now, lies in the company’s Terms of Service (“ToS”), which clearly explain that users of Google’s many services are subject to having their data read and processed. In fact, this is largely how Google’s services function effectively, seamlessly carrying data from one product (like Gmail) to the next (like Google Calendar).
Specifically, Gmail’s terms of service state, with regard to child pornography and illicit images:
“If we become aware of such content, we will report it to the appropriate authorities and may take disciplinary action, including termination, against the Google accounts of those involved[.]”
If that weren’t enough, we also already know that Google processes the data in our Gmail messages in order to serve relevant advertising to us within our inboxes. The only difference here is that the processing of certain data could now lead to criminal charges for alleged violations of child pornography laws.
However, Google has publicly addressed the limits placed on how far the provider will go to serve its idea of justice. A spokesman for Google has already provided some clarification to put privacy advocates at ease:
“Sadly, all Internet companies have to deal with child sexual abuse. It’s why Google actively removes illegal imagery from our services — including search and Gmail — and immediately reports abuse to the NCMEC.
. . .
It is important to remember that we only use this technology to identify child sexual abuse imagery — not other email content that could be associated with criminal activity (for example using email to plot a burglary).”
Google’s Child Pornography Image Filters and Your Data
The technology behind how this all works, and how it keeps most users’ privacy intact, lies in the matching of “hashes”: unique alphanumeric fingerprints of image files that have already been deemed illicit material. Google, the NCMEC, and various other state and federal agencies have worked together to develop a database of these images and assign hashes that can be matched against content being transmitted or stored on Google’s services and web search index. Matching hashes can trigger tips to law enforcement, while other email content is kept private. Even innocuous imagery stays beyond the eyes of Google’s human employees, since only the “hashed” fingerprints are actually read by the company.
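To illustrate the principle, the sketch below shows hash matching in miniature. It is an assumption-laden simplification: Google’s actual system reportedly uses robust perceptual hashing (technology akin to Microsoft’s PhotoDNA) rather than the simple cryptographic hash used here, and the function names and sample database are purely hypothetical.

```python
import hashlib

def file_fingerprint(data: bytes) -> str:
    """Return a hexadecimal fingerprint of the raw file bytes.

    A real system would use a perceptual hash that survives resizing
    and re-encoding; SHA-256 is used here only to show the principle.
    """
    return hashlib.sha256(data).hexdigest()

def matches_known_database(data: bytes, known_hashes: set) -> bool:
    """Flag a file only if its fingerprint appears in the known-bad database.

    Note that the file's content is never inspected by a human;
    only the fingerprint is compared.
    """
    return file_fingerprint(data) in known_hashes

# Hypothetical database of fingerprints supplied by a clearinghouse
# such as the NCMEC (contents invented for this example).
known_hashes = {file_fingerprint(b"previously documented illicit file")}

print(matches_known_database(b"previously documented illicit file", known_hashes))
print(matches_known_database(b"an ordinary vacation photo", known_hashes))
```

Because only fingerprints are compared, an ordinary photo that is not in the database produces no match and its content is never examined, which is the privacy property the article describes.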
The suspect whom Google recently turned over to authorities has been charged with one count of possession of child pornography and one count of promotion of child pornography. He remains in custody on a $200,000 bond.