In late July, a fire broke out near Peachland, British Columbia, and the blaze raced up a dry hillside faster than firefighters could follow, eventually prompting the closure of two highways and the evacuation of 400 homes.
Midway through the summer fire season, photos of what was dubbed the Drought Hill fire quickly popped up online, showing plumes of dull grey smoke rising from the dry scrubland of the Okanagan Valley. But one image, posted on the Facebook page of a self-described “digital creator” the next day, claimed to show the “OUT OF CONTROL” fire in technicolor. The scene it depicts defies belief — fluorescent orange flames throwing up plumes of coal-black smoke as tiny helicopters and water bombers whiz overhead.
The image is lush, bright — and fake.

The B.C. Wildfire Service took the rare step this week of warning the public that an image posted online claiming to be of the Drought Hill fire in the Okanagan Valley was not real, but had been generated by AI.
The service specifically called out AI-generated images — citing this image and another purporting to be of a fire near Bear Creek — for spurring fear and anxiety by spreading incorrect information about the location and behaviour of the very real fires it is battling. “When misinformation is spread online, it can start to take root,” says provincial information officer Sarah Budd.
“Obviously it’s frustrating for our staff. We work really hard to quickly and efficiently get accurate information out.”
Dramatic pictures have a real psychological effect on people, says Ali Asgary, a York University professor of disaster and emergency management. The concern with overly dramatizing or sensationalizing images is that they might spark panic or unnecessarily paralyze people with indecision, he says. He stresses that generative AI isn’t inherently bad — it’s already showing serious potential for helping those who oversee disaster response sift through the flood of data that pours in during emergencies, and it could even be used to spread educational images or graphics that have been verified by professionals.
But he also warns that there’s a real danger in letting just anyone pump out information of dubious origin. For one thing, verifying bad information wastes resources that are already stretched thin in an emergency situation. Then there’s the downstream effect of sowing doubt. “Over time, if a lot of this is happening, it creates confusion for people,” he says. “They don’t really know which sources to follow, which sources to trust.
“People start losing trust in their agencies and people who are supposed to provide information to them.”
The image was first posted on an account with the name “Joema Sombero,” a “digital creator” with 43,000 followers, on July 31, when the fire had been burning for more than a day. Alongside the dramatic AI-generated photo was an AI-generated caption that described the fire as “fast-moving” and “human-caused” and stressed the “massive aerial attack underway” to fight it.
“Stay safe everyone, and huge respect to the firefighters battling this non-stop!” “Sombero” wrote in a comment underneath the post, adding a praying hands emoji and the hashtag #PrayforBC.
Many of the comments were furious. “Why are you using AI pictures and AI write ups to sensationalize Canadian Wildfires? Please stop. It’s not helpful,” reads one of the first. “Sombero” did not respond to requests for an interview the Star sent to the Facebook and associated Instagram accounts, but did hop back into the comments to explain their actions.
“The images and write-ups I share are AI-generated for illustrative purposes only and are always tagged with a disclaimer,” they wrote to the commenter. (When the image was first posted it did not include an AI disclaimer, according to the post’s edit history on Facebook; a day later, one was added.) “My goal is never to sensationalize but to raise awareness about the severity of these events.”
In another comment, they say it’s necessary to have more eye-catching images to help people “pay attention in a sea of scrolling.”
In some ways, the account mirrors the trajectory of popular AI use. It came to life in the spring of 2023 and initially posted nature shots that appear to be regular photographs, mostly of mountains and lakes in the Canadian Rockies and the Calgary area. The poster seems to have discovered generative AI around early 2024 and began experimenting with generic Canadiana content — northern lights, mountain ranges, people wearing plaid — that displays the slightly cartoonish look of earlier AI models. A well-received post during this period got maybe a few dozen likes or heart emojis.
But earlier this year, the account appeared to zero in on natural disaster content and began posting increasingly realistic AI-generated images of forest fires, volcanoes and, in one case, a grizzly bear attack. The AI-generated captions got longer, more dramatic, more emoji-studded.
Fires are a major focus. There’s an AI image of the fires in La Loche, Saskatchewan, and fake images of people evacuating ahead of the flames. (“The aftermath … is heartbreaking,” the caption notes.) But it’s earthquakes that seem to be particularly enticing to social media eyeballs. This week, a post about how “THE EARTH’S CRUST WON’T STAY QUIET” got more than 1,000 interactions, as did one about “SHAKING” in Southern California. Then came a big win: a single post about the number of earthquakes worldwide in the last month — “EARTH IS SHAKING HARD” — saw more than 6,000 people hit the react button.
A short video posted to the account features a cascade of happy-looking emojis with a celebratory title: “I got over 6,500 reactions on one of my posts last week! Thanks everyone for your support!”
Another term for the flow of fake imagery being churned out by the account is AI slop, says Lauren Dwyer, an associate professor at Mount Royal University who studies emerging technologies and how they influence behaviour. “It’s just AI-generated images for the sake of AI-generated images,” she says.
Even the poster’s responses to commenters have the fawning tone of ChatGPT, she points out. (“I hear your frustration,” they assure one person upset about the AI use.) It’s possible this is one person letting generative AI run amok, she says, but it’s also possible the account is run by bots. Either way, these accounts are often set up less to spread reliable information than to generate clicks.
Much like how snack foods are engineered to keep you eating, AI is also trained to ensnare your attention. “AI creates really, really eye-catching images,” Dwyer says. “Getting that balance of lighting and drama, all of the aspects that go into incredible photojournalism? To be able to do that with a click, as opposed to having to find the right angle and navigate wildfire smoke, it just removes all the barriers.”
The gold rush for social media engagement is why AI images of everything from natural disasters to celebrities behaving badly to weirdly unappealing recipes have flooded social media. Facebook allows accounts with a large number of followers to make money through ads in videos and paid subscribers. It’s not clear if this is the page’s goal — it currently appears to have two subscribers paying $1.29 a month, one of whom is a ɫɫÀ² Star reporter — but it provides a window into the incentives at play.
“It’s a business,” Dwyer says. “Everyone who is doing it is taking up space in a media landscape and trying to find their niche.
“Like, it’s about trying to make money, and it doesn’t really matter how that gets done.”