Speaker 1:
Frequency Podcast Network, stories that matter, podcasts that resonate.
Jordan:
Have you been on Facebook lately? Or, perhaps a different way for me to ask the same question: have you met Shrimp Jesus yet? For those of you who are perhaps mercifully no longer spending time on the world's biggest social media platform, I will elaborate. Shrimp Jesus is, well, it's Jesus as a shrimp, or at least as a part-shrimp, part-man figure, and he is one of thousands of increasingly bizarre AI-created images that are proliferating on there thanks to Facebook's algorithm. Now, why is that happening? Well, it's the same way anything goes viral: thanks to engagement from other Facebook accounts. But who is engaging with this incredibly weird, totally obvious AI spam, and why? And speaking of why, why should you care, especially if you're not on Facebook at all? Because while most of these images are totally bizarre and obviously fake, a lot of them aren't. And the demographic that uses Facebook perhaps more than anyone, which likely includes some of your older loved ones, is the one least equipped to spot these fakes or the scams that might come with them.
I’m Jordan Heath-Rawlings. This is The Big Story. Jason Koebler is a co-founder at 404 Media. He also co-hosts the 404 Media podcast. He was previously the editor in chief at Motherboard. Hey, Jason.
Jason Koebler:
Hey, how are you?
Jordan:
I’m doing really well. I haven’t been on Facebook in quite a while.
Jason Koebler:
Well, you're lucky. I hadn't been on Facebook for a long time either, until I realized what was going on there, and now I've been rediscovering it as a platform. It's a dark place.
Jordan:
We're going to talk about all of that today, and I thought, to begin, to give people some context around what we're looking at, it might help if you could briefly describe the dead internet theory: what it is and where it comes from.
Jason Koebler:
Yeah, so the dead internet theory is this idea that much of the activity on the internet is bots and spam and automation. And you can take this very far; some people, deeply conspiracy-minded people, say, well, how can I know anyone on the internet is real? It's just me and then a bunch of bots. The more accurate and sort of normal read of it is that there's so much bot activity on the internet that a lot of the internet is just bots talking to other bots, sort of like content written for algorithms, by algorithms, to appease other algorithms and surface it on Reddit or Google search or on social media. And it's kind of hard to track down where this originally came from. It's one of those internet memes that has sort of arisen over time. It's called the Dead Internet Theory, with all capital letters. But I think this is a feeling that many people have just from being a person who navigates an internet that feels increasingly fractured and depersonalized.
Jordan:
And the premise of your reporting is that Facebook is a lot like that, but also significantly different. How so?
Jason Koebler:
Yeah, I think that my premise is that the dead internet theory is real, but I think that when people see AI spam and spam content in general, they’re very quick to dismiss it as just the dead internet. It’s become a meme on Reddit, on Twitter, on other social media platforms that when something AI generated or spammy goes viral, they just say, oh, dead internet. And on Facebook specifically, my feeling is that that is too reductive, that it’s more like a zombie internet where there is a lot of bot activity, but the bots are often interacting with other humans, and humans are wasting a lot of time talking to the bots. And it becomes this kind of weird soup where you can’t tell who is real and who is fake. And you also have this scenario where a lot of Facebook accounts that used to be human are now controlled by bots because Facebook is one of the older social media platforms. And frankly, a lot of people who used to have Facebook accounts have died or they have had their accounts hijacked or they have stopped using Facebook. They’ve sort of let their accounts go more or less fallow. And this is an opportunity for spammers, hackers, bad actors, et cetera.
Jordan:
So I mentioned off the top that I haven't been on it in a long time, save for, I guess, some Messenger stuff. For those who haven't delved into it, or who maybe, again, use Messenger to keep up with friends and family but aren't typically scrolling their Facebook feed, what do these things look like now? Give us a sense of the experience someone might come across.
Jason Koebler:
Yeah, I mean, it's not that much different from what it was like in 2016. And I say 2016 because that's sort of when Donald Trump was elected, and in the aftermath Facebook was blamed for fake news and Russian disinformation and so on and so forth, and for boosting things in the newsfeed. And I feel like that was sort of an inflection point where a lot of people left Facebook. That's when I personally stopped using it so often, except for, it's funny you mentioned Messenger, I use it for Facebook Marketplace to buy furniture for my apartment and stuff like that. I feel like people have taken bits and bobs of Facebook and have continued using those, but have stopped using the platform as a whole. But to answer your question, it's a very complicated platform now. You go on and there's a newsfeed, but then there's also Pages, there's Groups, there's Marketplace, there's Messenger, and it's a very bloated piece of software.
It's really hard to navigate if you're not using it every day and scrolling it. It's still a very algorithmic feed, as it always has been, or has been for a long time. But Facebook has started this new thing called recommendations, which is its attempt to compete with TikTok, and essentially they're showing you content that is popular on the platform that has nothing to do with any of your friends or anything that you've ever liked. This is a long way of saying that I started seeing a lot of AI-generated spam on Facebook, clearly AI-generated images, and when you clicked into the pages that were posting them, they were posting dozens and dozens and dozens of AI-generated images an hour. And when I say AI-generated images, I mean very bizarre stuff. I think Shrimp Jesus is the one that has...
Jordan:
Describe it for someone who’s not seen Shrimp Jesus.
Jason Koebler:
So it is an image of Jesus, sort of the stereotypical Jesus that probably many people imagine in their heads from paintings: white guy, long hair, sandals, exactly. But his arms are made of shrimp, like the crustacean, and he's sort of levitating underwater. And there are many, many variations of this. And these images keep going viral on Facebook, sometimes with tens of thousands of likes, hundreds of comments. Shrimp Jesus just keeps going viral. A lot of images of sand sculptures that are clearly AI-generated are going viral, these fantastical log cabins and nature scenes. And increasingly, a lot of children holding cakes saying it's their birthday, with misspelled text. They're uncanny because they have sort of the very smooth edges of AI, and deformed hands, and things like this. They're not trying to trick people in many cases. They're just bizarre.
Jordan:
And before we get to sort of how people engage, or whether people engage, with these images, I guess maybe explain how you decided to investigate what the algorithm is doing with AI content.
Jason Koebler:
Yeah, so I've done four or five stories about this now, and the first one is probably a good place to start. There were a lot of images of wood carving going viral. And what I mean by wood carving is there's this hobby where, often with a chainsaw, people will take logs and carve a sculpture out of them. And this is very highly skilled work. There are competitions, there are artists who do it. An example is this guy in the UK who uses a chainsaw to carve dogs, and then he sells the dogs for thousands of dollars, because it takes him weeks and weeks and weeks to make these. And he documents his process, and these images often go viral on Facebook, and he uses them as marketing. And back in December, a reader of our website told us that AI clones of his work were going viral on Facebook.
And there was a group on Facebook that was documenting how these images were spreading. So essentially this guy in the UK would post images on Facebook, and then someone, we still don't know who, probably many different people considering how many I've seen now, was downloading these images and running them through what's known as an image-to-image AI generator, which essentially takes one image, changes it slightly, and then spits another image out on the other side. And this is an automated AI tool. And then they're taking these AI-altered images and posting them on Facebook and pretending that it is their own work. And the reason that we could tell where the images were coming from is because often the details were very similar, like the images changed only slightly, and these images kept going viral over and over and over again. And there was a community of people on Facebook that were documenting all of the different pages that were posting them.
And so I talked to them, I interviewed them. There was this one woman in Australia, actually, who had created spreadsheets of the source images, the real images that she had found on the internet, which were often posted months or years before AI was a thing. So these are images posted on the internet in like 2015, 2016, sometimes earlier, that were suddenly going viral. But they were bizarre and uncanny, and sometimes had these weird AI-generated artifacts where the person would have two heads or an extra mustache or something like that. And it was very crazy to just watch how this stuff was proliferating on the site. And so I did a story on that. The documentation that community had created was extremely helpful. Just in the last six months since this started happening, we've gone from very realistic AI to just truly bizarre stuff.
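The image-to-image generator Jason describes is a tool that takes an existing photo and re-renders a near-copy of it with slightly different details. Below is a minimal, illustrative sketch of that general technique using the open-source Hugging Face diffusers library; the model name, file names, prompt, and parameter values are assumptions for illustration, not anything documented in 404 Media's reporting.

# Illustrative sketch: re-rendering an existing photo with an image-to-image
# diffusion pipeline (Hugging Face diffusers). File names, prompt, and
# parameter values are hypothetical.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Load the source photo (hypothetical file name) and resize it for the model.
source = Image.open("source_photo.jpg").convert("RGB").resize((768, 512))

# A low "strength" keeps the output close to the source image, so the result
# looks like a slightly altered copy rather than a brand-new picture.
altered = pipe(
    prompt="a chainsaw-carved wooden dog sculpture, outdoors",
    image=source,
    strength=0.35,
    guidance_scale=7.5,
).images[0]

altered.save("altered_copy.png")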
Jordan:
Why are people doing this? Why would they want to take an account like that, edit photos slightly and repost AI images to what I can only assume are collections of bots slash people? What’s in it for them? How does this all work? Where’s the money here?
Jason Koebler:
That was the original thing that was driving me crazy: I didn't know. A lot of the pages were just posting the images, and clearly there's some value to having a Facebook page that has a lot of followers, that gets a lot of engagement, and that can get content surfaced in the algorithm in front of people. But there was no call to action. They were not linking off platform, they weren't selling anything. They had nothing to do with politics. It didn't seem like it was a disinformation campaign or any of the things that we had seen on Facebook previously. But then we found these pages, and I say we because I was following what the group was doing, and the group was made up of 30 or 40 people who were documenting all of this. And I think one of them found a page that had started linking out.
And so it was an AI-generated image, and then in the comments it had a link to a website, and that website was also AI-generated. It had just nonsense text and a nonsense image at the top. And the one thing it did have was tons and tons and tons of ads; you were just assaulted with pop-up ads, like 30 or 40 ads on the page. And at that point I realized, okay, this is a clickbait spam farm, more or less. At least some of them are. They're sending people off platform, they're collecting pennies from the people who happen to click, and they're making money that way, which is a type of inauthentic online activity that I'm very used to. And then we were able to find other pages that were linking to drop-shipping websites where they were selling dog leashes or baby onesies that had nonsense phrases on them, more or less.
And so a lot of them were trying to sell products as well. And then the last thing I'll mention is that there is a marketplace, a black market essentially, for Facebook pages with huge followings. And so my theory, and what seems to have borne out over time, is that a lot of these pages were posting this AI spam to generate a following, and then at some point later they would either sell the page or pivot it to one of these scams where it's collecting ad dollars or selling products or potentially doing disinformation. There's the possibility that these could be abused at some point in the future.
Jordan:
So who is engaging with, sharing, and commenting on these images? And here I mean not the ones that are slightly plausible, but the ones that you're talking about, like Shrimp Jesus and whatnot.
Jason Koebler:
So this is the very tricky thing. I sort of categorize them into two buckets for myself. One is realistic AI, which is the wood carving stuff I was talking about, and the other is the bizarre Shrimp Jesus stuff. And in my mind, it seems like the people who are engaging with the realistic AI do not know and do not understand that it is AI, and that these are real people who are commenting. And I say that because I've spent many hours clicking into the profiles of the people who are doing this, and the people who are commenting on the realistic AI by and large have patterns of activity that I think are human. They are talking to other people, they're responding, they're referencing real life, like they're talking about weddings that they went to or funerals that they went to with other people who sometimes share their last name, suggesting that they're family.
And this is a pattern of behaviour that I'm very used to from just being on Facebook for a long time. This is how I used to use Facebook. You sometimes see arguments break out in the comments of these images. One that was very striking to me was an AI-generated image of a wood deck in someone's backyard, and people were fighting about whether the pillars and the planks were up to code, essentially. And you had these people sort of fighting over this fake image and whether it complied with US law or state or local law. And then you have the bizarre stuff, the Shrimp Jesus AI, and it seems like almost all of the activity on this is either bots or people whose accounts have been hijacked or something like that. And I say that because often the comments are just "Amen," one word, over and over and over again, or maybe a string of emojis. There's very rarely any engagement with the content of the picture. And then on some of these pages, you can see the history of people commenting. You can see what one person, or one account, is saying on all the images in a group, and you can see that they're saying the exact same thing on all of the images, which suggests that this is bot behaviour.
Jordan:
In the big picture here, what’s the issue with this for regular people who presumably can still go about using their Facebook feed the way they want to use it? What’s the risk here for them or for the platform in general?
Jason Koebler:
To be honest with you, I find it personally offensive, more or less, as a journalist who has at times relied on Facebook to get my work out there or to engage with other people. Meta, and Facebook the platform specifically, have tweaked and changed their algorithm so many times over the years to make it very difficult to reach people, especially without paying to boost the content. And here you have people who are abusing the platform, posting hundreds of times a day, who are able to make content go viral over and over again using a mix of inauthentic behaviour and, more or less, tricking people. So there's that. But I think that while a lot of the accounts that engage with this stuff are not real, the fact that these bots are able to engage with and boost this content so that it shows up in the feeds of humans is not good.
It's disconcerting. You can imagine many different ways that it could be abused, and it also creates this scenario where Facebook is full of junk that is being posted by bots, boosted by bots, and then shown to humans. It's not a social network if what you're seeing is AI-generated spam boosted by bots, and that is what Facebook is moving toward. At the last quarterly earnings call, Mark Zuckerberg said that 30 percent of what people are seeing on Facebook right now comes through this recommendations algorithm, which is the mechanism through which this AI-generated content is being shown to people. And he saw this as a huge opportunity, because Facebook believes that people will stay on the platform longer through these recommendations. That means we're going to see more and more content recommended in this way, and the people who are doing this spam have found a way to game the system so that what you see is junk and not human-generated content that took effort and time to make.
Jordan:
Has Facebook responded to any of your reporting? Have you tried to ask them about the proliferation of particularly the bizarre stuff?
Jason Koebler:
Yeah, so I have talked to Facebook a couple of times for some of my original reporting. They talked to me, and originally one of the reasons they talked to me was because some of these accounts that are posting images are stolen; the accounts themselves are stolen. I had identified a dog rescue in California that had its account hacked. This dog rescue relied on its Facebook page to reach potential foster parents and donors and people who were going to adopt pets, and their account was hacked and then pivoted to this AI spam, and the owner of that dog rescue had a really hard time reaching Facebook and getting the account back, so on and so forth. And Facebook told me, after I reached out for comment, that this is not allowed, and they gave the account back. There was also this metal band that had their account stolen and then pivoted by the hacker to post this sort of thing.
So Facebook said, hey, this type of hacking and account stealing is not allowed, we don't tolerate it. Although it's very hard to get any sort of action from Facebook when this happens if you're just a user and not a member of the press. But as far as the AI-generated content goes, the content itself, Facebook has posted a few blog posts where they say that they are going to start labeling this stuff. They said that they were going to launch it in May. It's May now, and they have rolled out a new tool where the person who uploads an image can label it as AI-generated, but it's voluntary. And it's not just bizarre stuff that has shown up on Facebook as AI spam. There's a lot of grotesque stuff as well, and things that, in my opinion, violate Facebook's own rules: a lot of deformed and mutilated children, a lot of violent amputee content.
I don't really know how else to describe it, and I don't know why it has devolved to this point. There are a lot of accounts that just go more and more extreme, and the more extreme they go, the more engagement they get. I think that a lot of what's happening is bot activity, but Facebook has not really been deleting this stuff proactively. Sometimes they will take it down if we write an article about it. NPR did an article about it somewhat recently, and they took action on some content, but by and large, they seem to be okay with this stuff for the moment.
Jordan:
My last question is just this: what can ordinary people, again, people like my mom, who use Facebook to keep track of the family and see what their grandkids are up to, do to protect themselves from this kind of onslaught?
Jason Koebler:
I mean, honestly, it's sort of the sad fact: AI has lowered the bar to content creation. It is easier than ever to spam, and that is not going away. I think that's one of the big threats and harms of AI that we've seen so far. It's not that it's better than human content; it's that it's way easier to make. It's way easier to generate 50,000 images than it is to paint a single image. And so I think that it's hard to signal to Facebook, hey, I don't want to see this stuff, so don't show it to me. AI is being integrated into all of the platforms that we use online. It's being integrated into Twitter. It's being integrated into Google search. It's being integrated into Facebook. Thankfully, at the moment, it's still relatively easy to identify AI-generated images if you spend a lot of time looking at them. That's what I do for my job.
Jordan:
But that’s probably not the average Facebook user.
Jason Koebler:
I don't think it's the average Facebook user, no. I think it would be helpful to try to educate yourself on what this sort of thing looks like, though I do think Facebook has an aging population, and that makes it quite difficult to do. The other thing I'll say is that while I've been reporting these stories, I've talked to people in my life and I've posted on my own Facebook, like, hey, have you guys seen any of this AI-generated content? Please send it to me if you have. And I was pretty alarmed because, one, a lot of people did send me the same type of AI-generated spam that I was talking about, but a lot of people also sent me images that they thought were AI-generated that I knew were real, just by doing research and figuring out where the image came from.
In some cases it was artists posting art that they had painstakingly documented themselves creating. And I think there is a real risk that people who have their guards up so much, because they don't want to be fooled by AI-generated content, end up dismissing real content as AI-generated. Either way, you have a situation where people can't determine what is real and what is fake. I know that doesn't answer the question of how to protect yourself, but I'm still kind of grappling with that myself.
I think that what I do in my day-to-day life is I don’t take anything I see on social media at face value unless it comes from someone or a specific news outlet that I trust. And I think that you kind of have to seek out sources of information that you trust and rely less and less on social media, which if we’re able to do properly, might not be such a bad thing, but it’s definitely a lot more work to specifically subscribe to different publications or podcasts and go directly to them versus having them fed to you via social media.
Jordan:
Jason, thank you so much for this and thank you for all the work you and the rest of the folks are doing.
Jason Koebler:
Thank you so much for having me.
Jordan:
Jason Koebler, co-founder of 404 Media and co-host of the 404 Media podcast. If you want to hear more from him on the Facebook investigation, you can find his reporting at 404 Media and on the 404 Media podcast.
That was The Big Story. For more from us, you can head to TheBigStorypodcast.ca. You can’t actually find us on Facebook because as you may know, Facebook blocks Canadian news and that’s us, so we won’t see you there. Don’t worry about that. If you want to give us feedback, you can write an email and send it to hello@TheBigStorypodcast.ca or you can pick up the phone and call 416-935-5935 and leave us a voicemail. Joe Fish is the lead producer of The Big Story. Robyn Simon is also a producer on this show. Marshall Whitsed was working on the show this week as a producer, and this episode was produced by our editorial assistant, Chloe Kim. Stefanie Phillips is our showrunner.
Mary Jubran is our digital editor. Diana Keay is our manager of business development. I am your host and the executive producer of this show, Jordan Heath-Rawlings, and together we make up the Frequency Podcast Network and we are a division of Rogers. Thanks for listening. We’ve got, again, a couple of treats for you in the feed this weekend, including an In This Economy episode about how to take a vacation when you can’t afford it. Surely we could all use that right now. Thanks for listening. Once again, we’ll be back with a fresh brand new episode of The Big Story on Monday, and we’ll talk then.