Jordan: I’m going to start today with a question I’m thinking a lot about these days. In order for us to go back to lives that are even close to normal, we’re going to have to be able to track contacts of people who test positive for COVID-19. And we’re going to have to be able to do that unbelievably quickly. And so, knowing where that has gone in other countries, the question is, and I’ll put it to you, Claire, how much of your privacy are you prepared to give up if it means that you can go out to a restaurant or see a movie again?
Claire: You know, I want to say none. But I also know the reality of the world we live in, and we’ve even talked about it on this show a few times. We’re already being tracked every single day, and a lot of us are at the point now where we just willingly give our information for things that make our lives easier. This situation feels different, but I don’t know, is it? Is it too late for me to protect my information? I guess the real question I would have is, how would my information be used?
Jordan: Right. And I think I actually said this back on one of the episodes we did about facial recognition, I basically operate under the assumption that if I’m out in public, I’m being tracked somehow. It sounds slightly paranoid, but it honestly isn’t that far from the truth. So my instinct would be to actually just say, sure, go for it. But of course, it’s not just my privacy I’m talking about, since presumably tracking me would provide information on my friends, and my family, and even you, Claire. Some program, somewhere, would know that we speak or meet every weekday. And they’d know what time we did and where we did it and for how long, and everything. So even if you didn’t opt in, even if you didn’t want to make that trade, I kind of made at least part of it for you.
Claire: Yeah, that’s another layer that most of us don’t consider. And the other thing I think that we don’t really consider is how a surveillance system could even be implemented in certain places. And when I say certain places, I mean the democracy that is Canada.
Jordan: Yeah. But something tells me that that’s where this conversation is going soon. Beyond the If and into the How. So we decided to talk to someone who’s done a lot of research and a lot of thinking about how the sorts of surveillance programs used in corporations or in less democratic countries could be adapted to be, well, maybe something that doesn’t make half of us automatically recoil. So we’re going to see what’s inevitable and what’s impossible, as soon as Claire gives you everything you need to know about COVID-19 in Canada headed into this week, because I hope you spent some time this weekend not thinking about this.
Claire: Provinces across the country are set to start easing restrictions today. Ontario, Quebec, Alberta, Manitoba, and Saskatchewan are each allowing certain businesses to open and lifting some bans. Manitoba is going the furthest by allowing museums, libraries, and retail businesses, including restaurants, to reopen at half capacity. Ontario and Quebec, the two hardest hit provinces, are not going as far. Ontario’s allowing a small number of mostly seasonal businesses to reopen, while Quebec is easing the lockdown on most retail stores outside of the Montreal area. These changes come after a weekend in which thousands more cases of COVID-19 were identified, and in which a much anticipated portable test made to detect COVID-19 in under an hour was recalled. The Ottawa company that made it, Spartan Bioscience, says it’s voluntarily recalling the product after Health Canada expressed concerns. The federal government approved this device last month and was planning on sending some to remote and Indigenous communities where there is limited access to testing. Prime Minister Justin Trudeau announced the government is investing $240 million to boost access to online health services, including mental health support and virtual access to doctors. He also announced $175 million to support a Vancouver biotechnology company, which Trudeau says has shown promising signs of progress in identifying antibodies that could be used to create a vaccine or treatment for COVID-19. As of Sunday evening, Canada had recorded 59,474 cases of COVID-19, with 3,774 deaths.
Jordan: I’m Jordan Heath-Rawlings, and this is The Big Story. Jesse Hirsh is a researcher and a futurist, and he writes a newsletter, which you can find at metaviews.ca. Hi, Jesse.
Jesse: Hey Jordan.
Jordan: You staying safe? How’s everything going?
Jesse: One day at a time, but otherwise, okay.
Jordan: And I imagine you’ve been thinking a lot about what it looks like as we try to use technology to get a handle on this.
Jesse: Well, absolutely. And you know, part of it is my own interest in technology, but I also happen to have a compromised immune system. So I’m motivated both intellectually and physically, as it were, to really find the fastest path for all of us to get out of this.
Jordan: Well, why don’t you explain for us the hope of using technology to help us trace the path of the infection and the contact it has had with people? How would that actually work, in theory at least?
Jesse: Well, on the one hand, I think contact tracing is essential, both for our ability to manage this pandemic and for the path we need to take out of it. And technology, I think, is incredibly tempting because it offers the allure of automation, in that, in present terms, contact tracing can be really labour intensive. It takes a lot of public health resources. But if you could get technology to automate the mapping of who’s been infected and from what source, there’s a lot of reasons why that would be quite appealing, and why that might make people think it could offer us a shortcut or a quick path out of this pandemic. And it also, I think, reflects the ubiquity of technology: for those of us who have smartphones, they’re kind of central to our lives. And it’s easy to make the assumption that everybody has a smartphone, and so if we could use those smartphones to really track and manage and collect a ridiculous amount of data about the way in which this virus is transmitted and spreading, it’s natural why that would be quite tempting, and something that researchers around the world have legitimately been investigating for a few months now.
Jordan: So if that’s the theory, how does it work in practice? Or does it, when you stop to think about it?
Jesse: Unfortunately it doesn’t work, for two primary reasons. Number one is it’s just not accurate enough. There’s too many false positives. It’s too easy to sort of pollute the data or get garbage data. And then the second reason is not everyone has a smartphone. So on the one hand, it doesn’t produce enough accuracy to really meet the needs of public health officials, let alone accurately trace. And then the other issue is you’re already bifurcating the population in two, where you’re all of a sudden treating everyone who has a smartphone as privileged, because you’re providing them with this type of contact tracing service. Whereas there still is a sizable amount of the population who do not have the same access to technology, or who may live in a part of the country where connectivity isn’t so great. So they may have a device, but that device isn’t necessarily communicating with the network. So it’s the danger of both creating false expectations, of making people believe that the solution will work when it won’t, when instead it can facilitate further transmission. And then the other is kind of data integrity or data security: because it is so inaccurate, it lends itself to sabotage. It lends itself to lying, where, you know, people could say that they were exposed because they don’t want to go to work, or conversely hide the fact that they’d been exposed because they do want to go to work. And so it’s that margin of error being so wide that a growing number of researchers are saying, hey, it’s a great idea, but unfortunately it’s not one that’s going to work.
Jordan: Okay, so assuming that smartphone tracing won’t work, and assuming just for the purposes of this conversation, and maybe also because we’ve seen no real sign of ramping up these thousands of health officials that you would need to do human-based tracing, what are the other options out there?
Jesse: Well, there’s other types of surveillance options that certainly look beyond, say, what each individual smartphone is capable of. And this kind of location-based data is currently being used, especially when it comes to trying to gauge whether people are complying with physical distancing. But I’m actually hypothesizing that we’re going to get a scoring system, or something that’s often referred to as a social credit system. And this basically assigns each individual a score that could reflect things like the probability of whether you’ve been infected, or could reflect confirmation that you have been and now have immunity, or it might reflect that you’re an essential worker and therefore should have a different status than people who should be staying at home, or people like myself with a compromised immune system, or people who are otherwise vulnerable and around whom greater caution should be taken. It’s the idea that in using technology and using surveillance, there has to be a translation. There has to be a way in which we take all of that data, we take all the models that are being created, and we literally convert it into a score assigned to each individual that perhaps either helps inform them as to their risk or their likelihood of being infected, or conversely their safety, or it influences decision makers or influences businesses as to how they should manage their customers or their staff. So it really acknowledges that people are hungry for data, and it creates a type of system in which we can evaluate individuals based on available data and based on their relationship both to the virus and to the rest of society.
Jordan: Where would all that tracking and surveillance data come from in order to be used to compile this score? What would we be watching people with essentially?
Jesse: Well, on the one hand, there is the potential for voluntary, where if you had a scenario like this in which you asked people to comply, they could then install an app on their phone, or even manually enter information about where they’re going or what their vital signs are, or what type of symptoms they may have. So on the one hand, it could be entirely done on an opt-in basis, where you’re encouraging people to provide more information. And in exchange, you’re providing them with greater certainty. You’re providing them with a greater sense of safety when it comes to how they interact with the outside world. But on the flip side, this could be done without people’s consent. It could be done without people opting in, in terms of, you know, quite frankly, we already live in a surveillance society, and there is already a ton of data available about us that depending on who has access to that data, whether it’s a telecom provider or whether it’s an eCommerce provider like Amazon, who, given the increase in the amount of online shopping people are doing, they’re probably getting a much more accurate picture of where we are, both in terms of our health status, but also in terms of our work status, or even physically, location based where we are. So it’s kind of a slippery slope in terms of the amount of data that’s out there and the fact that these types of scoring systems kind of already exists, just in isolation. And it would be a matter of marshalling them or directing them towards, in this case, a public health application.
Jordan: When you start talking about collecting the kinds of data that various corporations or organizations have on us, and combining it and letting the government look at that combination, if we’re talking about slippery slopes, that feels like we’re heading down towards stuff that people talk about in conspiracy theories to me. What’s the big one out there that this could lead to?
Jesse: Well, I mean, it’s funny you say that, cause it was actually the proliferation, or as I like to call it, the Renaissance of conspiracy theory that we’re now in, that prompted me to engage in this type of speculation. And to be clear, I am not proposing such a system. Rather, I’m anticipating such a system. And as a technology researcher and critic, I’m kind of alerting people to, hey, this is something we should discuss in a democratic society, as to whether this is a positive thing or something that needs regulation. So to your point, I think it would potentially alarm people. But I’m also not sure that it’s necessarily the government that would be running such a system. Because, for example, Facebook could decide that they’re going to run such a system, as could Google, AKA Alphabet. And in fact, a private company on its own could decide to start collecting this type of data and then turn to people, turn to users, and say, would you like to opt in? Would you like to volunteer this data to us? So that’s where I think it could manifest in a bunch of different scenarios. But to go to your question, in terms of the worst case scenario, on the one hand it does cement or crystallize a kind of surveillance society that science fiction has warned us about. But on the other hand, we are seeing an example like this emerge in China. And that was my other inspiration for this analysis, in that there is a set of social credit systems being developed both by the Chinese government and separately by Chinese industry as a way to basically evaluate people or incentivize people.
Jordan: Yeah, can you explain what that looks like in China? Because my mind obviously went to China as soon as this discussion came up, but I feel like I might not have a clear picture beyond like, Oh, China scores its citizens.
Jesse: Well, in fact, the way in which China scores its citizens is actually quite complicated, because it’s not unified. Like, there isn’t a central Big Brother system assigning people a score. It’s that the government has encouraged industry to build these types of social credit systems. And so different companies have built their own, different watchdog organizations have created scoring or social credit systems to monitor companies, to look at, for example, whether companies comply with environmental regulations, or whether companies have a safe workplace. So it’s basically using surveillance and data to then rank people, to create a score as to whether someone’s a good citizen, whether a company is a good company, or whether users on a social network are behaving and not abusing each other. And where China is doing this really quite openly and at a large scale, it is actually happening in North America, just not with the same attention. I mean, at a basic level, it’s our credit score, right? Which we all have, in terms of determining whether a bank or a lender or a company will give us credit. But then social media companies and other companies have built scoring systems. The most notorious here in North America was one built by Airbnb, in which Airbnb was using a score both to rate their hosts and their clients, but they were using it as a way to prevent people in the sex trade, or people using Airbnb listings for prostitution or for escort services. And so they were creating this type of ranking to try to decide who was using Airbnb for purposes of the sex industry and who wasn’t. And it was exposed and it became a bit of a scandal, but it sort of illustrated that any organization, any company that organizes data and uses algorithms, can assign people a score.
And whether that’s a score of how trustworthy you are or what your reputation is or what kind of an Uber driver you are, increasingly there are companies building these types of systems and I just think kind of like a force of gravity. It’s just a matter of time before they start connecting and interlinking to create meta scores or larger scores that can, in this case, help evaluate our public health status.
Jordan: So that’s kinda what I was talking about with our producer Claire in the intro to this show, and we might’ve even talked about this before in some of our conversations, is that I walk around and behave online as though I am being tracked by someone or something at all times. Cause I realize that I probably am. But what really makes me sort of stop and wonder about it is what it would actually look like on the ground in Canada if the government were to say, and again, I’m using the government just because it’s the easiest way to go, okay, I want to combine all those entities that are tracking you, whether it’s, you know, cameras when you’re walking around, or your smartphone, or my Amazon orders, or my Facebook status, whatever, and put it into one combined score? How would, in a democracy like ours, how would the government go about that? What would be the process?
Jesse: Well, I think the first issue is privacy, in that while it certainly should alarm people as to the scope and depth of such a surveillance campaign, it is in theory possible to structure such a system while preserving people’s privacy. And part of that has to do with the access and scope of such a surveillance program. Because as you mentioned, a lot of this information already exists. So in theory, the government would not have to engage in extra surveillance, they’re just collecting the surveillance. And then, for example, they could place restrictions that say law enforcement may not access this. CRA, the tax authorities cannot access this. Political parties and politicians cannot access this. And that the only people who can access this information are public health officials for public health applications.
Jordan: That sounds like a tough sell to the public, to be frank.
Jesse: Absolutely. And let’s be clear, I don’t believe that this would necessarily be a sell. But this is me trying to imagine how this would happen in a democratic context. Because I think we’re talking under emergency measures, right? Under emergency measures in which thousands of Canadians are dying, in which the virus is literally devastating, if not dismantling, the economy. And it literally becomes a kind of national security issue, both because people are dying, but also because your economy is grinding to a halt. And so, you know, this could happen after, let’s say, a second wave being much more devastating than the first and reaching the general population in a way that the first didn’t. So maybe there’s a lot of people who just lost a loved one, or who fear losing a loved one, and are therefore motivated by the idea that a system like this could actually gain, if not control of the transmission, at least quantify and understand the transmission. Because let’s face it, as it stands now, we’re still in the dark. We’re still not able to test people at the scale or at the speed that’s necessary. So I could see a scenario in which emergency conditions merit emergency measures. And you could structure something like this in which the government collects the data, but only for public health purposes. And that’s where I think there is room to do this in a democratic society. But then the other issue is the criteria, in that it would have to be transparent. People would have to be able to say, well, I don’t want to discriminate against the elderly, or I don’t want to discriminate against people who are disabled, or I don’t want to make it easy for people to lie and therefore pollute the data with fraudulent data or with misinformation. So there’s a lot of ways, just as we discussed how automatic contact tracing can go wrong, well, there’s a lot of ways in which a scoring system could go wrong.
And I think that’s why it has to be transparent, and it has to be restricted, and it has to be something that society can discuss, especially if they’re afraid, especially if they’re concerned. Whereas the whole reason I’m bringing this up is my fear that this would all be done without telling people, in total secrecy and without disclosing that such a system would take place. Because I do fear that we’re going to reach such a crisis point that the likelihood of this level of pervasive surveillance, and in particular of assigning people a score to evaluate their risk or to evaluate their vulnerability, is really high, unless we come up with some other solution to manage this crisis.
Jordan: Okay, well, let’s assume for a moment that we do do this and we create this and make it accessible just to public health. How could that data be abused if it was breached?
Jesse: Well, there’s unfortunately a long list of ways in which it can be abused. And I think the easiest is scams. You know, if you knew that someone was particularly vulnerable, you could call them up and either pretend that you’re public health, pretend that you’re their bank, pretend that you’re the police. I mean, we are already seeing a kind of pandemic of cybersecurity scams and exploits right now because people are afraid, and that is a very fertile condition for scammers and cybercriminals. But this can also lend itself to political abuse, right? I mean, you know, these are the scenarios in which a dictatorship or an authoritarian regime emerges, because they’re able to take advantage of people’s fears, or potentially identify who’s immune and draft them into some kind of an army or a workforce. You know, granted, that’s ludicrous. I think the biggest threat is that this type of surveillance becomes normalized. That rather than being seen as a public health application, it becomes a social control application. Because that is what’s happening in China: these systems are not just being used to, you know, make sure that corporations are good citizens or are responsible. It is very much used as a form of social control, to try to really limit what people do and sort of guide their overall behaviour. So there are a lot of really dangerous and scary applications of this, and I think that’s why we have to be vigilant. But the extent to which technology, and in particular data, is going to play a role in the way in which we chart a course out of this crisis is why I think we need to be thinking about these types of systems. I’m perfectly fine if everyone in Canada said, no way, we’re not going to use this, we’re content to stay at home for two years. That’s fine by me.
I’m totally cool with that, but I kind of feel that, you know, come end of summer, maybe even before end of summer, as people start to get stir crazy, and if the second wave is as bad as I fear it may be, I think that’s when desperation starts to become part of the dialogue. And that’s why it helps to think through these things before that crisis moment, so that if we do embrace a system like this, it can be done responsibly. Or if we reject a system like this, we do so with the knowledge that it means a much longer, slower process in terms of finding a path out.
Jordan: How much of our privacy do we have to be prepared to trade for some level of freedom while this is still going on? Cause that trade-off seems inevitable to me now.
Jesse: Well, I would see it as a kind of triad. That it’s not privacy versus health or privacy versus freedom, but privacy versus freedom versus time. In that time is the real issue, both in terms of how we weren’t prepared for this crisis, so, you know, that’s why we had to take the drastic measures we have. And I think if we continue to remain unprepared, that’s where time will not afford us privacy. That’s where we won’t have the time to preserve our privacy. Whereas I actually think, whether it’s contact tracing, whether it’s a scoring system, or whether it’s other types of pervasive surveillance schemes, the more time we have, the more we can use privacy by design, the more we can make sure the scope and application of the data fall under very strict privacy constraints. But the more we kick the can down the road and ignore these questions until it’s too late, that’s when we’re most likely to give up our privacy. That’s when people are more likely to say, yes, I consent to giving up my privacy. Whereas technology researchers like myself, we’ve always said, look, if we plan ahead of time, you can have surveillance and privacy. You can have data analytics and privacy protections. It just requires a lot of heavy lifting at the outset, and if you’re in the middle of the crisis, most people don’t want to do that heavy lifting. And that’s why we kind of have to have these debates as soon as possible. Cause I do think that there’s going to be a push for surveillance, and that push is going to come out of desperation, which means that people may be willing to discard their privacy in exchange for health, in exchange for freedom. And it doesn’t have to be that way. It’s not the type of Faustian bargain that we have to make.
Jordan: I hope that we’re more prepared and don’t have to do that. Thanks, Jesse.
Jesse: Thanks Jordan.
Jordan: Jesse Hirsh, a researcher and a futurist, and you should check out his thinking on this at metaviews.ca. That was The Big Story. For more, including Jesse’s previous appearances, which are mostly about the evils of Facebook, you can head to thebigstorypodcasts.ca. You can also find us in your favourite podcast player, Apple, Google, Stitcher, Spotify, whatever. And of course you can talk to us on Twitter at @thebigstoryFPN or via email, that’s thebigstorypodcast@rci.rogers.com. And as always, we love to hear from you, whether it’s via email or Twitter, whether it’s on your podcast app via a rating or a review. And we’ll read them even if they’re not positive, I promise. Thanks for listening. I’m Jordan Heath-Rawlings. We’ll talk tomorrow.