Toronto police were using Clearview AI

And we talk just a wee bit more about coronavirus

The big news today is that Toronto police have admitted, a month after denying it, that they were using Clearview AI facial recognition technology.

Or at least they were using it until Feb 5th, when Chief Mark Saunders found out, according to a spokesperson. However, the media weren’t notified until today, Feb 13th. In fact, CBC’s Jayme Poisson and Global’s Rachel Brown posted tweets within five minutes of each other, and The Star tweeted a story about ten minutes later, so it was a story rollout shaped by police.

That makes sense when you consider TPS has also said they’ve asked Ontario's Information and Privacy Commissioner to review whether Clearview AI is an appropriate investigative tool. 

Certainly it raises questions about why some police officers were using such invasive technology without the awareness or oversight of the chief and board. 

Another thing to note is that in the original New York Times story by Kashmir Hill, as well as in one of the follow-up stories, it doesn’t sound like just one Canadian agency was involved.

In her first story she writes: “Federal law enforcement, including the F.B.I. and the Department of Homeland Security, are trying it, as are Canadian law enforcement authorities, according to the company and government officials.”

Her lede on the second story is this: “Law enforcement agencies across the United States and Canada are using Clearview AI — a secretive facial recognition start-up with a database of three billion images — to identify children who are victims of sexual abuse.”

So, continue to pay attention to this story, I’d say.

Virus disinformation slowdown

I think coronavirus disinformation is slowing down a bit, partly because some of the accounts and things I monitor seem to have turned their attention elsewhere: to the primaries in the U.S. and Trump’s acquittal, or to the pipeline demonstrations here in Canada.

But it’s a gut feeling and it’s hard to quantify, because there’s lots of information I just don’t have access to. And that relates to something that happens to me more frequently now as a journalist: tech people really want to talk on background.

Kate Conger’s tweet on this struck me: 

Background is a journalism practice where a source wants to give information but doesn’t want that information attributed to them in a piece. It can mean no direct quotes or it can mean quotes but no identification and it’s something tech companies try all the time.

For this piece, I tried to get tech companies to go on the record about how they’re fighting disinformation related to coronavirus. The reps for Facebook/Instagram, Reddit, Twitter and Google/YouTube all offered to speak on background, and when I said no, they either directed me to blog posts made by the company or, as the Reddit rep did, answered two questions but wouldn’t give a longer interview. In Google’s case, they also offered some additional information about fundraising efforts, but none of it went beyond what had already been said publicly.

This matters because it’s a way for tech companies to control their message and to try to shape how journalists cover them. Everything is planned and controlled and it takes away the ability of journalists to get information, especially information that hasn’t already been prepared to cast organizations in a helpful light.

And it’s not like platforms don’t have information to give. Contrast how Facebook didn’t answer my questions with the detail in their blog post about removing three networks engaging in “coordinated inauthentic behaviour”: networks originating in Russia, in Iran, and in Myanmar and Vietnam.

Here’s where they’re specific: “Today, we removed 78 Facebook accounts, 11 Pages, 29 Groups and four Instagram accounts for violating our policy against foreign or government interference. This activity originated in Russia and focused primarily on Ukraine and neighboring countries.”

Disinformation, though, is more than just foreign interference, so it would be great to see platforms be more open about what they do to remove all kinds of disinformation.

Lastly, here’s a video of me on The National, looking at people around the world debunking disinformation related to coronavirus. Anand Ram produced it, and it was his hard work that got us videos from people around the world sharing their own efforts. I sometimes like to think of fact-checkers as a little global club, and I feel like this piece gets at that, while also taking down some of the common coronavirus myths we’ve seen.

I am away next week so no newsletter on Feb 20th, but this newsletter will return on Feb 27th.