Jordan
Yesterday on this show, we spoke about our attention and our focus, and crucially, what control we do and don’t have over it. As you heard our guests say, we have more control than we think we do. We are not helpless by any means. But at the same time, the forces affecting our attention are not passive either. It’s an arms race. And right now we’re losing. It’s not just that social media exists, and it happens to be a place that I visit and I get angry and distracted and fed up with people who disagree with me. It is that, and the algorithms that run those sites do ensure that I see things that do that to me. But it is not the algorithms alone we’re up against. It’s people who want to use those algorithms against us to take something that’s already proven to be effective and hammer us with it until we crack. And not crack as individuals, until we crack as a society. Look, I live in a democracy. I like that. It seems to me these days like something I should appreciate while we have it. Because all you need to do to get a sense of just how ugly things might get is to visit some of the darker corners of the Internet where democracy’s opponents are busy making plans. And memes?
I’m Jordan Heath-Rawlings. This is Interconnected on The Big Story. Today, Part Two: Democracy. Renee DiResta is the technical research manager at the Stanford Internet Observatory, which is a cross-disciplinary program of research, teaching, and policy engagement for the study of abuse in current information technologies. Renee herself investigates the spread of narratives across social and media networks, with an interest in understanding how platform algorithms and affordances intersect with user behaviour and factional crowd dynamics. Hey, Renee, can you tell me what you do in English?
Renee DiResta
Absolutely. Thanks for having me. So we study pathological information systems, right? So what SIO does is we look at the Internet and its myriad manifestations, a whole bunch of different apps and platforms and new technologies that have really changed how we communicate today, particularly the way that we participate in communication very directly. Right. We’re all content creators now. At a minimum, we’re content sharers. And one of the things that we look at at SIO is how this new facet of the information ecosystem has changed the way that we receive and process information and relate to each other. So there are four very kind of tangible buckets of our work. One is trust and safety. That’s how do people think about the individual experience that we all have online, right? There are concerns about children, concerns about exploitation, concerns about mental health, concerns about harassment, brigading. These are the kinds of areas where trust and safety determines how we experience the Internet.
Then there’s information integrity, which is maybe more at a societal level. How do we think about what kind of content goes viral, how information moves, how we think about a lot of the conversations around what we used to call fake news, or propaganda, mis- and disinformation, rumours? How do we think about the ways in which the Internet has transformed our experience of these things, things that have always been around but that we experience very differently today? And then the last is policy, which a lot of people think of as government regulation, but it can also be self-regulatory policy. What are the ways in which we can create better and healthier information systems? What are the ways in which we can respond to some of the harms that we see on the Internet? And can we, as researchers at SIO, make informed policy recommendations based on our findings in the other three areas?
Jordan
That’s a great way to put it. I really understand what you’re saying now, and it’s the information aspect that we wanted to talk to you about today. As part of our little project here, we kind of got together and tried to figure out how the Internet specifically, but technology more generally, has changed us as humans. And information was one of the things we kept coming back to. You mentioned that this is something we’ve always been dealing with, but technology has changed how prevalent it is. Is disinformation completely unavoidable now? How does it embed itself so thoroughly into our everyday experience?
Renee DiResta
So that’s a great question. In all past media ecosystems, broadcast, television, radio, print, there’s been false and misleading information. You could walk to the supermarket checkout counter and pick up a tabloid telling you that there’s a boy who’s partially a bat, right? But what happens today is that information is curated for you. And so you participate in a system that’s very interesting. Recommendation engines suggest accounts you might like to follow. You follow those accounts. Curation algorithms determine, of the accounts that you’re following, what of the content that has been created by those accounts you’re going to want to see. So depending on whether you view your feeds in reverse chronological order, for example, where you’re going to see everything that the people you follow have put out, in decreasing order of when it was created, versus a feed that has been curated for you, you’re going to see very different types of information. The stories that hit your feed are going to be very different.
And so there’s an element in curation that really relies on what we call engagement. How many likes or shares something has gotten, what kind of velocity it has, are a lot of users engaging with it or commenting on it? A curation algorithm might then decide that this is something highly engaging that you might want to see: all these other people are talking about it, so you might want to talk about it also. So again, instead of seeing something that just happens to be posted at the time you open your phone, you’re seeing something that’s maybe a few hours old but has gotten a lot of attention, and so some attention actually begets more attention. As you see this content, maybe you feel a certain way about it, so you participate in sharing it. Maybe you click the share button, you click the like button, you go and paste it to a different platform entirely because you want your friends on that platform to see it. So we all become kind of conduits for determining what kind of content rises to the top.
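To make the curation mechanics a bit more concrete, here is a minimal, hypothetical sketch in Python of the difference between a reverse-chronological feed and an engagement-weighted one. The Post fields, the weights, and the scoring formula are illustrative assumptions, not any platform's actual ranking code.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    age_hours: float  # how long ago it was posted
    likes: int
    shares: int
    comments: int

def chronological_feed(posts):
    """Reverse-chronological feed: newest first, engagement ignored."""
    return sorted(posts, key=lambda p: p.age_hours)

def engagement_score(p: Post) -> float:
    """Illustrative score only: engagement divided by age, so a fast-moving
    post from hours ago can outrank a newer, quieter one. The weights are
    arbitrary assumptions, not any platform's real formula."""
    raw = p.likes + 2 * p.shares + 1.5 * p.comments
    return raw / (1 + p.age_hours)

def curated_feed(posts):
    """Engagement-weighted feed: attention begets more attention."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    Post("a", "quiet update, just posted", age_hours=0.2, likes=3, shares=0, comments=1),
    Post("b", "sensational claim from four hours ago", age_hours=4.0, likes=900, shares=400, comments=250),
]

print([p.author for p in chronological_feed(posts)])  # ['a', 'b']
print([p.author for p in curated_feed(posts)])        # ['b', 'a']
```

The same two posts produce opposite orderings under the two rules, which is the difference between "what just happened" and "what is getting attention" that Renee describes.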
And so there’s a different type of editorializing that’s happening. It’s not some editorial gatekeeper at an old broadcast property deciding what they’re going to surface or what they’re going to put out and give a platform to. Instead there’s this very participatory process that happens in which people intersect with algorithms and use tools that platforms have given them to continue to propagate information. And then what happens in that dynamic is that there are certain creators, certain content creators who are very good at creating content that gets that initial burst of attention or they have large followings and so they get initial significant lift. They’ve built up audiences over time. And so whose content is seen and how that intersects with, again, algorithmic curation and recommendations has really changed our experience of what we’re seeing on the internet versus what we might have seen on broadcast media.
Sometimes this is a good thing, right? Content that maybe wouldn’t have been picked up by editorial gatekeepers gets seen. Sometimes it leads to things that are very sensational, or that are highly emotional but not necessarily factually accurate, going viral. And so there’s just this trade-off and this tension as we try to think about what is the best way, or the optimal way, to design these systems that push information to us today.
Jordan
And this is a big part of your role at SIO, right, is to examine how people manipulate these systems in order to change, I guess, or control which information we end up internalizing.
Renee DiResta
We are not looking so much at which information you wind up internalizing. That’s actually a really, really hard thing to understand. There are other social science researchers that try to do that work. Just because you see information, just because you hear a news broadcast or read an article doesn’t mean that it’s going to have a major impact on your life or change how you think. What we look at at SIO though is what are the ways in which the tools that we have access to are occasionally manipulated by bad actors.
So for example, we can all tweet, right? Anybody who wants to can go and put out a tweet. But what we see is that state actors, to use China as one recent example, will go and create tens of thousands to hundreds of thousands of fake accounts. And then they too will use Twitter. Sometimes they’re creating tweets, so these fake accounts, these fake personas, are putting out content. Sometimes they’re trying to change the flow of the conversation by retweeting certain things quite a bit, to try to push them into that field of view, create that engagement, and trick the algorithms into picking them up and showing that content to more people.
So there are ways in which tools that we all have, tools that are actually quite valuable and useful, can be manipulated and used as weapons by bad actors that are trying to push a propaganda campaign, or run a disinformation campaign, or do something that is manipulative, trying to hijack your attention or hijack a conversation. So that’s what we look at a lot of the time: what are the ways in which the system can be manipulated, or is being manipulated.
Jordan
And who are the major players in those attempts? You mentioned China. I think everybody would probably automatically give you Russia. Are they all state players?
Renee DiResta
A lot of the time. State actors are one particularly well-resourced type of player. When I say well resourced, I mean they have access not only to social media; social media is additive, right? It’s another channel for them, but they also have access to quite a lot of broadcast channels. There are state media properties that have existed for a very long time, and they have an online presence now. There are front media properties in which the attribution is not really obvious, and that is a tool within their toolkit.
But beyond that, there are also spammers. So we do see people with a financial motivation who are trying to manipulate the public. Cryptocurrency is a huge area where manipulative automated accounts are constantly trying to spam people into taking an action that might be financially harmful to them, or into clicking on a link that might be malicious. So there’s a big dynamic of manipulation around cryptocurrency. And we do see at times also domestic groups within a country trying to, for example, manipulate a political conversation to push more attention to one particular type of argument, maybe to make it look like a niche perspective is actually the mainstream perspective.
So again, there are some real questions, real tensions there around where the lines are. How do we think about what is manipulative versus what is networked activism? That’s one of the real tensions right now, and one of the really interesting areas in this space: when everybody has access to megaphones, what kind of guardrails are appropriate for a system, and how do we balance trade-offs between things like moderation and freedom of expression?
Jordan
One of the things I’d like to understand is how we can differentiate a traditional political argument, where obviously everybody on all sides of an election or a vote makes their case, from something that’s been manipulated, something that’s been boosted to try and muddy the waters and change people’s minds. In that sense, do you get what I’m getting at? I’m probably asking this question badly.
Renee DiResta
Yeah, no, I think I do. We’re talking about persuasion, right? And so when you want to persuade someone or influence someone that’s not necessarily inherently bad, this is something that happens all day long. People do it constantly. There’s political persuasion, there’s marketing, there’s things that brands do. There’s a lot of different types, a lot of different manifestations of persuasion. And again, this is something that’s been true for decades, centuries, long before the Internet. And this question of what is good versus bad is a very thorny one because you don’t want to fall into a trap of saying that political party A is good and political party B is bad because of some sort of bias that you might have towards one party or another.
And so one of the things that we try to think about in our work, when we’re looking at election integrity, for example, is that we are not looking at whether candidate A is making a false or misleading claim about candidate B. What we’re trying to understand is whether there are certain areas that are uniquely harmful, and harmful to a degree that justifies the trade-off between free expression and moderation tilting in favor of moderation. So let me explain that a little bit more clearly.
Jordan
Please do. And if you could give us an example of that, that would be great.
Renee DiResta
So when we talk about moderation, there are roughly three buckets of moderation that platforms use. There is remove, where the content or the account is taken down. That’s the most extreme form. There is reduce, where, as we were talking about earlier, there are these algorithms that are surfacing and curating what kind of content is going to be pushed into a feed. So when something is reduced, it is pushed into fewer feeds, it is throttled, and oftentimes these platforms are trying to figure out what it is and whether it is manipulative in some way. And then the last bucket of moderation is inform, and that’s where you see some of the efforts to fact check. So a media partner might write a fact check, and the content that has been moderated using the inform treatment is given a label, or there’s an interstitial that you have to click on. Something just lets you know that perhaps something about the story is in dispute.
So oftentimes I tend to think of inform almost as counter speech. The content is not bad enough to justify it coming down; it does not necessarily even need to be reduced. But with inform, they’re saying: here is this story that’s gone viral, and here’s a little bit more context for you so that you can be a more informed consumer of this information. So these are the three buckets that we have: remove, reduce, and inform. One of the real challenges as we think about when to use these things is: is there a threshold that we should have to justify remove? What kind of content should be subject to remove versus inform? What kind of content is so bad or so harmful that it should be taken down, as opposed to contextualized for readers? And so platforms try to thread this needle, and they write up these very long policies that detail what kinds of content might be subject to remove versus inform.
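A rough way to picture the remove, reduce, and inform buckets is as a decision rule over some measure of harm. The single numeric harm score and the thresholds in this sketch are assumptions made purely for illustration; as Renee notes, real platform policies are long written documents, not a formula.

```python
from enum import Enum

class Action(Enum):
    REMOVE = "remove"  # take the content or the account down
    REDUCE = "reduce"  # throttle it into fewer feeds
    INFORM = "inform"  # leave it up, but add a label or an interstitial
    NONE = "none"      # no moderation action

def moderation_action(harm_score: float,
                      remove_threshold: float = 0.9,
                      reduce_threshold: float = 0.6,
                      inform_threshold: float = 0.3) -> Action:
    """Toy decision rule: the harm score and thresholds are illustrative
    assumptions, not any platform's actual policy."""
    if harm_score >= remove_threshold:
        return Action.REMOVE
    if harm_score >= reduce_threshold:
        return Action.REDUCE
    if harm_score >= inform_threshold:
        return Action.INFORM
    return Action.NONE

# A disputed but not dangerous claim gets labelled rather than removed.
print(moderation_action(0.45))  # Action.INFORM
```

The hard part, as the conversation goes on to say, is not writing such a rule but deciding where the thresholds sit and enforcing them uniformly.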
With the policy, they’re trying to lay out: these are the guardrails, these are the rules for using our system. But having the rules is one thing; enforcing the rules is another. So sometimes the enforcement is not as uniform as one might like. Oftentimes, particularly in a political arena, this leads to allegations that the platform is biased against one particular user type. In the U.S. that often takes the form of allegations of anti-conservative bias. And so one of the real challenges is trying to understand, again, how policies around remove, reduce, and inform are executed and applied. This question of how we should think about what is harmful, and what should be taken down versus labeled, is a really thorny matter of some debate right now.
Jordan
Who should be making those choices?
Renee DiResta
Well, unfortunately, right now the answer is largely that it is the platforms that are making these choices. And again, there’s a trade-off here, which is that nobody wants the government making those choices. Or most people in western democracies, I should say, don’t want the government making those choices. Very much in the U.S., people don’t want the government making those choices, because that would be perceived as the government deciding what speech can stay up or come down. At the same time, the platforms are not really accountable to anybody. As we’ve discussed, enforcement is not always uniform. And so particularly if a high-profile political account receives an enforcement action, oftentimes their supporters will be outraged and will immediately process it as: this is censorship against my political viewpoint. And so this leads to a lot of subsequent attention to the content that was perceived as a violation, which kind of gains a whole second life elsewhere on the Internet as people debate whether or not it was censored.
Unfortunately, however, doing nothing, the complete free-for-all, also does not really create a very positive experience. There are some perverse incentives there. For example, if you knew that nothing was going to come down, perhaps being the most sensational, writing the bat boy tabloid stuff, is going to get your particular political party or candidate the most attention, right? So maybe it creates an incentive to just lie, to put out things that are absolutely wild lies, or worse, again, at the intersection between information integrity and trust and safety, to just harass people with impunity, right? If you are willing to really scrape the bottom of the barrel and be absolutely terrible to all members of what you see as the opposite of your political tribe, and none of that harassment is moderated, then you create a kind of race-to-the-bottom incentive: you are almost incentivized to try to push the other people out of the conversation. So this is where that kind of free speech actually stifles the speech of another group of people. So how should the platforms think about that?
And one of the challenges that we have is that the platforms, internally, are kind of trying to do their thing. But this question of who oversees the platforms right now is very much another matter of heated debate. In the U.S. we’ve not really succeeded in any kind of regulation because, while many people feel that unaccountable tech platforms should not be the arbiters of what kind of speech stays up or comes down, what you have is an interesting dynamic where on the right there are very deep concerns about anti-conservative bias, and they want to see more speech stay up. And on the left there’s concern about harassment and hate speech, and they want to see more things come down; they want to see particularly things related to health misinformation or election misinformation dealt with more stringently, whereas the right wants to see that as political speech and wants to leave it up.
And the reason there are partisan angles is that there is real power conferred by having your side’s content go viral, your side’s stories go viral. Being able to capture attention, these are tools of real power. The stakes are quite high, and that’s where the challenge really plays itself out: both the right and the left in the U.S. are angry, but there’s no common ground around what a better system might look like. So this is where, on the policy front, we do try, as a nonpartisan entity, to begin to think through what might be a path forward, what might be a more acceptable way to think about moderation and trade-offs against speech looking forward.
Jordan
Before we go, I’ll get you to explain maybe a couple of ways that that could happen. But first I wanted to ask you about the big picture and democracy, because it’s interesting, you’ve mentioned that both the left and the right are angry for their own reasons. It’s much the same up here in Canada as well. What I wonder about, and often worry about, is whether the big picture is just diminishing trust as a whole in the process. We covered the January 6th hearings recently, and obviously the proliferation of the big lie on social media contributes, on both sides, to just the lack of faith, the lack of belief that elections will be free and fair. Does that worry you more so than which side, or how we should deal with censorship, just the ultimate sort of degrading of trust in the democratic process?
Renee DiResta
Yes, absolutely. And thanks for raising that. I feel like I got off on a bit of a tangent when I was trying to explain moderation in the context of harms, so let me go back to harms very quickly. It is my opinion that when we talk about harms, candidate A saying something bad about candidate B has been part of political campaigning since political campaigning began. But the harm is actually that undermining of trust and confidence in the system. The harm is the delegitimization, in my opinion. And this is where, when we think about harms, democracy is predicated upon the losing side accepting the loss. And when you are constantly eroding confidence that the election was free and fair, that the election was legitimate, that the outcome is authentic and real, it is that undermining, I think, that is actually the harm that we pay attention to.
So when we think about harms, and when we think about where these trade-offs should be, it’s our belief that there’s a great framework that Google uses, called Your Money or Your Life. It dates back to, I think, 2012. And it was this idea that when people are searching for information, there are certain areas in which the information has to be held to a higher standard. There’s constant search engine gaming, people trying to get their links higher up so that they can sell more products and so on and so forth. But Google recognizes that in the context of health and in the context of finance, your money or your life, you don’t want to be returning results based on what’s popular. You need to be sure that you’re returning things that come from authentic domains or sources that are reputable, because you don’t want somebody with a cancer diagnosis to type in their cancer and get a whole bunch of stuff about, like, mushrooms and juice fasts. You want them to be receiving good information.
Similarly for finance. And now we’re at an interesting point where Your Money or Your Life has, in some ways, unexpectedly extended to the future of your democratic government. And that’s where I think, again, at SIO, we are not the fact-checking police. We are not out there looking for something that is false on the internet. What we are trying to understand is: what are the dynamics of these narratives that lead to decreasing confidence in fundamental social structures that really do have a profound impact on people’s lives?
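A loose sketch of the Your Money or Your Life idea: for certain sensitive topics, rank by source authority rather than by popularity. The topic list, the authority and popularity fields, and the rule itself are assumptions for illustration, not Google's actual system.

```python
# Categories held to a higher standard: an assumption for illustration.
YMYL_TOPICS = {"health", "finance", "elections"}

def rank_results(results, topic):
    """Toy rule: for 'Your Money or Your Life' topics, sort by source
    authority; otherwise sort by popularity."""
    key = "authority" if topic in YMYL_TOPICS else "popularity"
    return sorted(results, key=lambda r: r[key], reverse=True)

results = [
    {"url": "viral-miracle-cure.example", "popularity": 0.95, "authority": 0.10},
    {"url": "hospital.example/treatment", "popularity": 0.40, "authority": 0.92},
]
print(rank_results(results, "health")[0]["url"])  # hospital.example/treatment
```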
Jordan
Renee, thank you so much for this. The last question I have for you is more of, I guess a theoretical one, but is this happening because humans and human behaviour just evolved so much more quickly than the institutions we need to govern them?
Renee DiResta
I don’t know if it’s necessarily human behaviour. I think there’s some interesting work by social scientists and psychologists saying that a lot of what is happening on the Internet is just kind of people being people, right? That inclination towards trusting people who think like you, or self-sorting into very like-minded communities, predates the Internet. But one of the areas that I think is interesting is this question of how the structures that we create online exacerbate perhaps the worst tendencies of human behaviour, and whether there are ways to think about that. As we think about curation and recommendation, as we think about engagement, as we think about suggesting who to follow, are there ways to just recognize that there are certain tendencies to the way people behave in crowds and groups, and to think about better systems, better-designed systems, that create perhaps healthier communities online?
And this is where you’re starting to see that work being done by newer, kind of emergent entities in the market. Twitter has this thing called Bluesky, and it is asking the question: can you give tools for curating your own feed to the public? Is that something where, instead of an attempt to create one overall curation structure, users have more granular control? Can you do that with moderation tools, where users have the ability to say, I don’t want to see content that uses certain types of words or that comes from certain types of communities? And so are there ways in which that becomes more decentralized? What does the Internet look like as it continues to evolve? And are there ways to think about design and structure, more ways that we shape the system from its conception on, as opposed to moderation, which is more inherently reactive and looking at what is the bad stuff that comes out at the other end?
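As a rough illustration of the user-controlled curation Renee describes, here is a sketch in which each user composes their own filters, muting words or blocking communities, instead of relying on one platform-wide rule. The helper names and data shapes are hypothetical, not Bluesky's actual tooling.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    community: str
    text: str

# A user-defined filter is just a predicate: keep the post or drop it.
Filter = Callable[[Post], bool]

def mute_words(words: List[str]) -> Filter:
    """Drop posts containing any of the muted words (case-insensitive)."""
    return lambda p: not any(w.lower() in p.text.lower() for w in words)

def block_communities(blocked: List[str]) -> Filter:
    """Drop posts from communities the user has blocked."""
    return lambda p: p.community not in blocked

def apply_filters(posts: List[Post], filters: List[Filter]) -> List[Post]:
    """Each user composes their own moderation instead of relying on
    a single platform-wide rule."""
    return [p for p in posts if all(f(p) for f in filters)]

my_filters = [mute_words(["giveaway", "crypto airdrop"]), block_communities(["spam-central"])]
feed = [
    Post("a", "gardening", "tomato season is here"),
    Post("b", "spam-central", "huge crypto airdrop, click now"),
]
print([p.author for p in apply_filters(feed, my_filters)])  # ['a']
```

The design choice here is the one Renee raises: filters are applied per user at read time, shaping the system from its conception, rather than a central moderator reacting to bad content after the fact.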
Jordan
Thank you again for this. I really feel like I have a clearer understanding, at least, of what’s happening in my feed and in my head.
Renee DiResta
Thank you for having me.
Jordan
Renee DiResta at the Stanford Internet Observatory. That was The Big Story. For more, head to thebigstorypodcast.ca. As always, talk to us on Twitter, be nice, at @TheBigStoryFPN. You can send us an old fashioned email, or you can send us a really old fashioned phone call. I don’t even know if we should be taking phone calls during a week dedicated to the Internet, but we are. The phone number is 416-935-5935. Listen to this podcast wherever you get your favourite podcasts. Share it with your favourite people. Thanks for listening. I’m Jordan Heath-Rawlings. We’ll talk tomorrow. Bye.