Joe Fish
Scrolling through social media can often feel like wading through a waist-deep river of s***. And I say that as someone who has never had to face the level of harassment that journalists, particularly female journalists, and even more so, female journalists of colour, deal with on an almost daily basis. So the question becomes, how do we protect ourselves from this harassment and abuse? Is this sort of vitriol just an unavoidable consequence of living a connected life? Or is there a way that we can wrest control back from the opaque algorithms that curate our feeds and give ourselves autonomy over our experience online? Our guest today, after becoming the victim of online harassment several times herself, decided to create a tool that empowers users to do just that. So how does it work? How do they ensure that it silences harassment rather than legitimate dissent? And could this be the first step towards creating a safer, more equitable internet?
My name is Joe Fish and I’m filling in for Jordan Heath-Rawlings. This is The Big Story. Tracy Chou is a software engineer and an advocate for greater diversity in the tech industry. She’s also the founder and CEO of Block Party. Hi, Tracy.
Tracy Chou
Hi, Joe. Thanks for having me.
Joe
No problem at all. So, just to start, I want to sort of zoom out and look at the issue we’re discussing today from sort of a macro lens. So when it comes to online harassment, and I know that that’s a very broad term and I think we’ll sort of hone in and define it more precisely in just a moment, but when it comes to online harassment, do we have any idea about the sort of scale of the problem? And how would you even begin to go about quantifying something like that?
Tracy Chou
Yeah, so no one is doing this research on a global scale across all the platforms to fully understand it, but there are some studies in the U.S. we can look at. Pew Research has been releasing reports on online harassment. The most recent one, from January 2021, found that 40% of adults in the U.S. have experienced online harassment. Compared with the prior reports, going back to 2017, there's a trend line that's going upwards, and the severity is also increasing, which is very unfortunate. There are some numbers that show that it's higher for women and people of colour, minorities. But it is very difficult to quantify this, partly, as you identified, because what counts as online harassment is difficult to define, and it's really hard to look at it on a global scale across all platforms.
Joe
Right, understood. But when you think about online harassment, what are some of the things that sort of immediately come to mind for you?
Tracy Chou
Yeah, there's a whole range, and a couple of different frameworks you can use to look at it. One is that there's stuff that's very high prevalence but lower severity. So think about drive-by trolling, random people coming to comment on your posts; sometimes it will be flavours of sexism and racism, that kind of thing. And then you have much more intense targeted harassment, whether it's one person who is very keen on bothering you, or sometimes coordinated attacks. And so you'll sometimes see groups coordinating on forums, places like Reddit or 4chan, where they're coming together and then coordinating an attack on somebody.
Joe
Right, and that could be, I mean, obviously horrifying for the individual afflicted. And I'm just wondering, how did this issue first come onto your radar? When did it go from being something you were passionate about to something you eventually decided to dedicate your entire professional life to?
Tracy Chou
It started for me just generally being online, and so the earliest experiences I have with online hate were back in high school, so more than 15 years ago. But most acutely, in sort of a personal and professional capacity, it was when I first joined Quora, the question-and-answer platform, as a software engineer. As the second engineer hired onto the team, the first thing I built was the block button, because somebody was already harassing me on the site, even though we only had a few thousand users at that time; this was in 2010. But it did escalate tremendously, as over the years I've done quite a bit of diversity and inclusion activism and built more of an online presence, particularly on Twitter. And so all those examples I was describing earlier, from the sort of low-grade trolling, sexist, misogynistic content to the targeted harassment and coordinated attacks, I've experienced all of those in the course of doing this activism work around diversity.
And there were a couple of instances in 2018, not too long before I started Block Party, where I was dealing with these attacks and trying to report them to the platforms, Twitter and Instagram, and just getting the reports turned back with "this is not harassment and we're not going to do anything." And so just using the normal channels that people try to report through, I wasn't able to get any kind of action. And because I've been in Silicon Valley for a long time and have a lot of friends who work at the platforms, when I took screenshots and shared them, friends and followers who worked at the platforms escalated internally and got those accounts taken down. And that actually made me more upset, that I could get this privileged access, like I could escalate through friends at these companies who could say, oh, we've dedicated somebody on trust and safety to make sure your case is handled, where the average user can't get that. And it felt so unfair to have this access. I would much rather be able to go through normal channels and know that the system is working. And so building Block Party felt like trying to bring that special access that I had to solve these problems more generally, so that everyone can feel safe.
Joe
You know, what was sort of interesting to me in reading one of your most recent blog posts was you mentioned the ability to tag someone in a post or photo online. And to me, amid the sea of toxicity that you encounter on these social platforms, that tagging function always kind of seemed relatively innocuous in my eyes. But you make a pretty compelling argument for why I might be wrong about that. Can you sort of walk me through that?
Tracy Chou
So I think a lot of these tech platforms are built in this way originally, where it seems like these features are very innocuous and they're good things. So you can get alerted when a photo of you is posted or somebody is mentioning you, but they also become potential abuse vectors. When somebody is tagged, they get a notification no matter what the photo or the comment is. And so the person who's posting that has a way to force their presence and commentary on that person. If you imagine with photos, it could be an abusive photo; it doesn't actually have to be a photo of that person that they're getting tagged in. I've experienced this, where somebody photoshopped photos of me and then tagged me in them, so I was getting notifications to go look at them.
And then even apart from the specific content, the volume of it can be really terrible, if you imagine getting bombarded with hundreds of thousands or more negative comments. And sometimes they're not even that negative. It could just be these things that happen when somebody becomes the main character on Twitter and everybody has an opinion, and it might just be mildly judgmental, or somebody commenting on the situation. But when you get this huge wave of comments, people unintentionally piling on, it can feel very terrible and overwhelming.
Joe
Can you think of any other design features of these social media platforms that on their face kind of seem relatively benign but then can be sort of utilized by nefarious actors? Like are there any other than tagging?
Tracy Chou
Anything can be abused, unfortunately, but a few come to mind. This is maybe a less direct way, but a lot of these platforms have prompts for you to share or post, and they want this engagement; those are the metrics that they're aiming for. But when you prompt people to post kind of thoughtlessly, because you put a big box in front of them and then have all these cues for them to post something, it encourages thoughtless comments. People like shooting off replies without really thinking them through, and that's not always great for online discourse. Replies to stories on a lot of platforms go directly into DMs. So on Instagram, if I post a story and people reply to it or heart it, it will go into my DM requests, which is not awesome. Sometimes I don't want to have a whole bunch of DMs from people who are just commenting on something.
Even things like safety features can have these kinds of trade-offs. So Twitter recently introduced the ability to turn off replies to tweets, which is a good anti-harassment protection. But this space in replies is also where people do fact-checking or responses to things. So when that space is gone, then the sort of misinformation or harassment that can proliferate from the original post goes unchecked. Things like the reporting function, originally introduced so you could report a bad actor, have been weaponized in many cases, where people will coordinate to take down an account by reporting it a whole bunch. So the lesson is, essentially, anything you build can probably be abused.
Another example on the harassment side is Facebook pages, where you might want to disallow certain words from being posted because people are using particular insults or insulting words. But then that same capability can be used for censorship, if you don't want people to bring up certain issues, particularly political ones. So it's just always a balance of these different values and things that you want to support.
Joe
I mean, these things that you're describing that can be weaponized against people, they seem like such fundamental building blocks. It almost seems like the ability to harass people is written into the DNA of some of these social platforms, which to me makes it seem like a really daunting task to try and actually instigate change and build a better, safer internet for people. But that's kind of exactly what you're trying to do with Block Party and with your advocacy. And I'm just wondering, what, to you, does that better internet look like? How do we get there, or at least begin the journey to getting there?
Tracy Chou
Yeah, I think there's a couple of ways to look at this. One is from the individual perspective, and from that perspective, it's that as a person who is going online, you should be in control of your experience. You should have the ability to set your own boundaries around who you want to interact with and what types of content you want to see. And right now we don't have as much of that. So that's one big factor that we're working towards. From the ecosystem perspective, this better internet that we want to build towards has less harassment, less toxicity, less noise; it kind of achieves that promise that we were sold originally about the internet, where when you democratize access, democratize content, democratize information, all that stuff, it's supposed to be really good. And so it really is about achieving that ecosystem where the fact that we are unconstrained by geography and physics anymore should be a good thing, that you can connect with people around the world and be inspired and get all that information, that democratization. It's not easy to get there, but I think even starting from the individual perspective of being able to set stronger controls, so you can continue to participate, will have an effect on the ecosystem as well.
Joe
In terms of that greater autonomy, I know you’ve spoken a lot about, I guess it would be considered a class of software called middleware, which sort of acts as, I guess, an intermediary between you and the platform and allows you greater access. Can you just explain to me what middleware is and how some of it works?
Tracy Chou
It's this idea of a layer of tooling that sits between the users and the platforms, so they can have more control over their experiences, what content they're seeing, what interactions they have. Block Party is an example of this type of middleware: if you imagine the default behaviour of platforms is that you get a notification every time you are tagged in something, Block Party changes that so you don't get notified every time you're tagged. You can filter those mentions and notifications based on the criteria that you've set and how you want to interact.
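To make that idea concrete, here's a minimal sketch in Python of the middleware pattern Tracy is describing: the user's own rules, rather than the platform's defaults, decide which mentions become notifications. The field names and example rules are hypothetical; Block Party's actual implementation isn't described here.

```python
from typing import Callable, Dict, List, Tuple

Mention = Dict[str, object]       # illustrative; a real platform API returns richer objects
Rule = Callable[[Mention], bool]  # a user-chosen predicate: True means "hold this mention"

def filter_mentions(mentions: List[Mention],
                    rules: List[Rule]) -> Tuple[List[Mention], List[Mention]]:
    """Split mentions into (notify, held): a mention is held for later review
    if any of the user's rules flags it; otherwise it notifies as usual."""
    notify, held = [], []
    for m in mentions:
        (held if any(rule(m) for rule in rules) else notify).append(m)
    return notify, held

# Hypothetical criteria a user might set:
my_rules: List[Rule] = [
    lambda m: not m["author_has_profile_photo"],       # default-avatar accounts
    lambda m: "some slur" in str(m["text"]).lower(),   # a personal mute phrase
]
```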
There are other ways that this might look. In terms of home feed and timeline curation, it could be, let's say, you only want to look at news sources that are more in the middle politically and a bit more vetted. You could say, I only want to look at these certain types of news sources. If there are scientific papers coming in, I want those ones to be peer-reviewed. You could imagine, even for kids, if in the future Disney were to implement some kind of home feed algorithm, they'd have a different set of choices around what types of content to promote that are more kid-friendly. It would be possible to then say, I want Disney's choice of home feed for me. So there's a huge space of possibility here when it's no longer the platforms deciding what your experience is going to be. And oftentimes with platforms, they are going to pick the sort of average best, the thing that kind of makes sense over the entire population, versus what you actually want as an individual.
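Each of those curation choices is, in effect, a feed policy a user could opt into. As a hedged sketch, with invented field names and source lists, each policy below is a function that a third party could publish and a user could select:

```python
from typing import Dict, List

Post = Dict[str, str]  # illustrative; the fields below are invented for the example

# A user-chosen allowlist of vetted news sources (example domains only).
VETTED_SOURCES = {"reuters.com", "apnews.com"}

def vetted_news_feed(posts: List[Post]) -> List[Post]:
    """Keep only posts linking to sources on the user's allowlist."""
    return [p for p in posts if p.get("source_domain") in VETTED_SOURCES]

def peer_reviewed_feed(posts: List[Post]) -> List[Post]:
    """Keep only papers flagged as peer-reviewed (a hypothetical metadata field)."""
    return [p for p in posts if p.get("peer_reviewed") == "yes"]

def kid_friendly_feed(posts: List[Post]) -> List[Post]:
    """A stand-in for a third party's (say, a kids' media brand's) content policy."""
    return [p for p in posts if p.get("content_rating") == "all_ages"]
```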
Joe
And how have platforms reacted to this? Are they supportive of these sort of third party software solutions or are they resistant?
Tracy Chou
Yes. So with Block Party, we're working very closely with Twitter, and they've been leaning very hard in this direction of decentralization and giving people much more choice. So they're very supportive, and this is their vision for the future, which is awesome. They want to encourage a thriving ecosystem of other developers building out these different experiences for users. You see some platforms, like Discord and Twitch, also opening up these APIs, or application programming interfaces, so that more people can be building solutions. I think we are starting to move in this direction, which is great.
There are some companies I think are a bit further behind in thinking about this and a little bit more resistant to giving up that control. But when I speak to people on these platforms, I think internally they’re starting to come around and realize that it may actually be better to give up some of this control and allow other developers to be building solutions.
Joe
Right. Well, I mean, I’ve read so much about these sort of content moderation farms at places like Facebook, or I guess Meta now. And I suppose for them it represents an opportunity because they wouldn’t have to pay these thousands of people to manually go through and flag problematic content, but rather offload some of that responsibility onto services like yours and the user themselves to sort of curate their own feed. Is that fair?
Tracy Chou
Yeah, that’s right. With platforms, when they want to assert that total control, then the burden is also very high for them to maintain this standard across the entire platform and make sure it’s enforced. And that’s where these content moderation farms are coming in, where they’re trying very hard as a platform to make sure that everything conforms to the standard, but changing that paradigm so that it’s not about trying to enforce the exact same standard across the entire platform, but giving users more choice, actually relieves some of that burden on the platforms. And it’s better for the individual users as well, where what they’re seeing can be much more contextualized to them, to their communities and those different contexts.
Joe
Right. And you've sort of touched on this delicate balance that needs to be struck between preventing harassment or discriminatory content from coming through, and the censorship of legitimate ideas, which these tools can also be weaponized to do. And we've talked on the show a lot in the past about so-called filter bubbles, where people end up in these spaces on the Internet where they don't see dissenting opinions, and they end up in these very closed kind of ecosystems. Do you ever worry that giving people the ability to filter what they see to this degree could kind of reinforce those filter bubbles? Does that ever cross your mind?
Tracy Chou
It's definitely something we think about. The first thing I want to point out is there's general research that shows that people who are online actually get exposed to more diverse information than those who aren't. So this phenomenon of filter bubbles can happen, but it's not necessarily writ large across the entire Internet right now. But the big danger of filter bubbles is when people are exposed to them without realizing that that's what's happening, and they think that all the information they're getting is reflective of the totality of reality.
The algorithmic adaptation that happens, the way that the home feed algorithm on Facebook responds to you, is that it gives people what they want. It will often reinforce people's existing opinions. But if you ask people what they want, they don't usually say that they only want things that reinforce their opinions, right? In my experience, when I think about going on Facebook, it tends to show me stories from tabloids, and sometimes I can't help but click on them. But I would never buy a tabloid off the magazine rack at a grocery store; I see all these tabloids and I'm not going to buy one of those. If I were to buy something, I would buy a copy of The New York Times or The Washington Post if I wanted news. But you can't get that kind of choice now, with these algorithms determining everything you're going to see online.
With middleware and being able to choose, it actually makes it more possible to get the experience you want. Going back to the way it was before the Internet, where people had much more choice over what media they would consume or not, we can get back to that. The way that Block Party works right now also helps in a different way, where we're cutting out the lowest-quality content, the trolls, the bots, and the noise, and it makes the platform more usable. You can actually see different opinions and engage with a wider variety of users and content.
Joe
And these filtration algorithms that you’ve built, how effective actually are they in preventing unwanted or harmful content from reaching people? And to what extent does there need to be a human moderator who remains in the loop and kind of oversees their function?
Tracy Chou
Yeah, for the users who are on Block Party right now, we're actually sometimes using very simple heuristics to filter out users. Heuristics like: does this person have a very newly created account? Do they have fewer than 100 followers? It actually works really well. The feedback we get from folks is like, oh, I can actually use Twitter, and it's not noisy and difficult anymore. So sometimes simple heuristics work great. These things are never going to be perfect, though. And whether it's heuristics or machine learning algorithms or other things that we're putting in there to try to be smart about filtering, there will always be mistakes. And my personal opinion is that it is very important to have humans in the loop, constantly assessing what is happening with the technology we're using, identifying any changes in behaviour as well, and any adaptations we need to make. There are a lot of things that are going to need nuance, context, and understanding.
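As a rough illustration of how simple those heuristics can be, here's a sketch; the threshold values and account fields are assumptions for the example, not Block Party's actual settings:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class Author:
    created_at: datetime   # account creation time (timezone-aware)
    follower_count: int

def likely_low_quality(author: Author,
                       min_age: timedelta = timedelta(days=30),
                       min_followers: int = 100) -> bool:
    """Apply the two heuristics described above: flag authors with a very
    newly created account or with fewer than 100 followers."""
    account_age = datetime.now(timezone.utc) - author.created_at
    return account_age < min_age or author.follower_count < min_followers

# Example: a week-old account with 12 followers would be filtered.
new_account = Author(created_at=datetime.now(timezone.utc) - timedelta(days=7),
                     follower_count=12)
assert likely_low_quality(new_account)
```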
One of the difficulties with very technical solutions is that they'll miss out on a lot of societal context or cultural context. And so you need humans to look at this stuff. One example that I've seen is people trying to send racist comments to me in the form of, like, don't accidentally eat your dog. And it's obviously referencing this racist trope about Koreans eating dogs. I'm not Korean, but the racists don't really care. They're just kind of tagging along to these racist tropes.
Joe
Can these filters filter out according to IQ? Do they have that function?
Tracy Chou
Working on that. Working on it.
Joe
So on top of these software solutions, you’ve also talked a lot in the past about how better regulatory frameworks can help with the issue of online harassment. And I guess to that end, the government also has a big role to play, and the government has a pretty bad track record when it comes to actually understanding the issue, among other things. In terms of policy, what are some changes that you would like to see lawmakers advocate for that might have a sort of immediate impact on this problem?
Tracy Chou
The thing that regulators can actually legislate that would be very helpful around improving social media platforms is having them open up. So that looks like more transparency, more interoperability, more ability for third parties and other developers to be building solutions on top of the platforms, like what we're trying to do. It's very difficult for a regulator to say, no harassment or no misinformation, because how do you even define those things? How do you enforce it? But from the technical perspective, forcing platforms to open up is something that you can require: you must allow people to be able to choose the algorithm that determines what they're seeing in their timeline.
So the way this is on Twitter right now is they've actually implemented it within the platform. You can look at the ranked timeline or you can look at the chronological timeline. The ranked one is Twitter making some decisions around what is more important to show you, and chronological is just based on when things are posted; we show them to you in that order. You can imagine other algorithms that could reorder what you're seeing. And what would be possible to legislate is that Twitter, or whatever other platforms, must open it up so that it's possible for other people to build and choose those algorithms. Or, on Facebook right now, they have no option for you to go chronological. You could imagine the platforms being forced to open up so that it is possible for somebody else to implement a chronological feed.
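As a sketch of what choosing your own timeline algorithm could look like technically, assuming a hypothetical open feed interface (the ranking function here is a toy stand-in, not Twitter's actual one):

```python
from typing import Callable, Dict, List

Post = Dict[str, float]  # illustrative: {"created_at": ..., "engagement_score": ...}
FeedAlgorithm = Callable[[List[Post]], List[Post]]

def chronological(posts: List[Post]) -> List[Post]:
    """Newest first; the platform makes no editorial ordering decisions."""
    return sorted(posts, key=lambda p: p["created_at"], reverse=True)

def ranked(posts: List[Post]) -> List[Post]:
    """A toy stand-in for an engagement-based ranking."""
    return sorted(posts, key=lambda p: p["engagement_score"], reverse=True)

def render_timeline(posts: List[Post],
                    algorithm: FeedAlgorithm = chronological) -> List[Post]:
    """The interoperability idea: the feed algorithm is a pluggable parameter,
    so a third party (or the user) can supply their own ordering."""
    return algorithm(posts)
```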
Joe
I mean, that sounds nice to me. I would like a few less articles about how we know that Bill Gates is a lizard person. I would actually appreciate that greatly. So what I want to end with here is: clearly you're hard at work on trying to tackle this issue, and there are other people also advocating for these changes. But in the near term, if you're somebody operating online, especially if you're a marginalized person, or if you're involved in any sort of advocacy that leaves you vulnerable to things like trolling or online harassment in general, do you have any tips for how you can keep yourself safe from harassment or online abuse?
Tracy Chou
Yes, there aren't any perfect solutions for keeping yourself fully safe online, but one key thing is to know that you can proactively set your own boundaries. And it's okay to liberally use the mute and block buttons. People aren't entitled to your time or attention, so it's fine to establish those boundaries. There are third-party tools now, so it's no longer the case that you either have to suffer or wait for platforms to improve; there are increasingly more solutions in a growing safety tech ecosystem. And this is more just general advice around engaging with technology: it's fine to take a step back when you need to, and be aware of what the impact is on you. One big thing for me was realizing that seeing harmful content in periods of attack was really damaging for my mental health, and it was important for me to take a step back. Things do calm down at some point, so I could come back and protect my mental health at the same time. It didn't mean that I had to step away forever; it's just trying to be mindful of what that impact was on me.
Joe
Right. Sounds like sage advice. Tracy, thank you so much for giving us your time today.
Tracy Chou
Thank you so much for having me.
Joe
Tracy Chou, the founder and CEO of Block Party. That was The Big Story. For more from us, you can head to thebigstorypodcast.ca. You can find us on Twitter at @TheBigStoryFPN. You can also call us and leave us a voicemail; the number is 416-935-5935. This show is also available wherever you get your podcasts. We'd really appreciate it if you found us there and left us a positive review.
My name is Joe Fish, and on Monday, Jordan will be back in his rightful spot in the host seat. Have a nice weekend.