A woman in a black jacket, holding a camera and standing on rocky mountain overlooking a city – via Rachel Martin on Unsplash
AI & Automation, Visual Journalism

The Visual AI Revolution That Isn't Yet (and Why That's a Good Thing)

The tools are everywhere. Image generators like Midjourney and Nano Banana have become household names among creatives. Video synthesis platforms like Sora can conjure footage of places that don't exist and moments that never happened. AI-assisted design is reshaping advertising, entertainment, and social media at a vertiginous pace. And yet, take a look at the web offerings of the world's major media organizations, and you'll find something surprising: a profession that has embraced automation in almost every other corner of its workflow remains deeply – and perhaps wisely – resistant to letting AI generate the images it uses to tell stories.

A Sector That Loves Technology – Up to a Point

It would be wrong to call journalism technophobic. The industry has been an eager adopter of automation and digital tools. Sports scores and financial data are routinely rendered into readable copy by algorithmic systems. Audience analytics platforms shape editorial decisions in real time. Newsletters, podcast production, and social media distribution are all managed by sophisticated software stacks. The back end of a modern news operation hums with technology. And it's been seven years since computer science and media expert Nicholas Diakopoulos published Automating the News: How Algorithms Are Rewriting the Media.

On the visual side, too, digital transformation has been sweeping. Photo editing, color correction, and cropping have been automated to a large extent. Metadata tagging and image search within archives increasingly rely on AI. Many outlets use tools like Datawrapper to create sophisticated charts and maps in just a couple of hours. A quick look at the NYT's Immersive section demonstrates how journalistic storytelling has benefited from 3D rendering and XR technologies. The infrastructure of visual journalism is, in many ways, already AI-adjacent. However, when it comes to the images and videos that accompany regular reports on politics, culture, and science – showing people, events, and places – the industry has largely held back from enlisting machines to generate them.

Cosmopolitan's Cover and the Road Not Taken

The most prominent early experiment with AI-generated imagery in a major magazine came in June 2022, when Cosmopolitan published what it called the first AI-generated magazine cover. Created using DALL-E 2 in collaboration with digital artist Karen X. Cheng, the image depicted a female astronaut on the surface of Mars, rendered in a synthwave style. The magazine framed it as a historic moment, a glimpse of a creative frontier. It was a striking image back then. And it generated considerable attention – partly admiring, partly skeptical.

But what's notable, looking back, is how few imitators it found. In the years since, AI cover images have not become a trend in mainstream publishing, let alone in news media. The experiment remained more of a conversation piece than a template. The moment didn't trigger the visual revolution many predicted.

A thorough web search (with or without AI assistance) yields only a handful of cases where newsrooms have used synthetic cover imagery – and reactions have rarely been warm. In 2025, pro photographer James Patrick criticized Brains Magazine and Soleish Magazine for going with fully synthetic covers. The editors did so without disclosure and apparently without good reason (though both Brains and Soleish are paid-content publications). Later that year, the Manila Bulletin used an AI-generated, comic-style picture of athletes to illustrate a story on the SEA Games – even though real images were available. It's hard to find more noteworthy examples that were publicly discussed within the last year.

Where AI Visuals Do Appear in News

That said, AI imagery continues to make inroads. Some outlets and individual journalists have begun using generative tools in specific, bounded ways – when the alternative is either nothing at all or something arguably worse.

One recurring use case is replacing generic stock photography. The problem of stock photos in journalism is well-documented: the images are often stilted, visually clichéd, and oddly disconnected from the stories they accompany. A piece about depression featuring a person staring blankly out a window on a rainy day, or a cybersecurity article illustrated with a figure in a black hoodie hunched over a glowing keyboard – these overused images have become meaningless, if not annoying. Some editors and designers have therefore begun experimenting with AI-generated alternatives for these illustrative slots, arguing that a purpose-built synthetic image is better than a generic stock photo. Pioneers include Slovak newspaper Denník N (whose editors frequently use Midjourney for article illustrations and have openly discussed their strategy) as well as American news and opinion site The Bulwark (a medium that also embraces Midjourney imagery and is very transparent about it). In Germany, paper of record DIE ZEIT sometimes experiments with AI visuals, including more elaborate animations.

A second use case is illustrating or visually reconstructing stories where no images exist. Historical events before the age of photography, classified or inaccessible locations, sensitive subjects where no camera could ethically be present – these are all areas where AI generation offers something that was previously unavailable. DW is experimenting in this domain within the scope of the GenKI4Media project. One of the pilots aims to convert text, photos, or audiovisual data into 3D virtual models of inaccessible real-world spaces for visualization. For instance, an investigative journalist might analyze satellite imagery to pinpoint a secret prison, then use an eyewitness's cell description to digitally reconstruct the room in 3D where no footage exists. The outcome of this ambitious project is still unclear. Hopefully, the use of generative AI tools will at some point speed up the process of 3D reconstruction, which is usually complex, time-consuming, and requires a full team (think of the NYT's famous piece How the Police Killed Breonna Taylor).

A third interesting use case – arguably the supreme discipline in this context – is turning classic journalism, i.e., a long read, into a modern, compelling multimedia story via generative AI. This concept is still in its early stages. A rare, bona fide example is Jaemark Tordecilla's take on Sheila Coronel's "The Making of Edgar Matobato". The story, which offers an unflinching look at the Philippines' drug war through the confessions of a hired killer, is already powerful in plain text form. Tordecilla took it to the next level by generating abstract images and animations, a voice-over, and a minimalist score with the help of AI. He also published an interesting workshop post that describes his process in detail.

So, some synthetic material has found its way into quality journalism – but the field of application is rather limited. There's no "visual GenAI revolution" in news reporting yet.

Why the Reluctance Is Probably the Right Instinct

The reasons journalists have been cautious about heavy use of AI-generated imagery are not hard to find. And they're very defensible.

At the most fundamental level, photojournalism rests on the idea of witnessing. A picture says, in effect: this happened, someone was there, this is evidence. The moment that contract is broken, the moment images can be fabricated with plausible fidelity, the evidentiary value of all images becomes suspect. Journalism has already spent years grappling with manipulated and misleading photos; generative AI raises those stakes considerably. The concern is less about journalists generating fake images than about the broader erosion of trust in visual media that AI proliferation may cause, making even genuine photographs easier to dismiss.

There are also practical and ethical concerns about representation. AI image generators have well-documented biases – they reflect the skewed demographics of their training data, and they can produce images of people, places, and communities that are inaccurate or reductive. Using AI to illustrate a story about, say, a specific neighborhood or a particular cultural community risks compounding existing representational failures in media.

In line with all these considerations, Manuela Kasper-Claridge (our former editor-in-chief) wrote in 2023 that "DW is firmly committed to journalism that is produced by people" – and that the broadcaster sees "no value in publishing photorealistic images generated by AI." AP's generative AI guidelines permit limited experimentation, but examples of visual GenAI on the organization's main news websites seem to be well-hidden or non-existent.

Labeling and transparency add yet another layer of complexity. When should readers be told an image is AI-generated? And how prominently? Different organizations have reached different conclusions, and as yet there's no industry-wide standard. The lack of consensus is itself a reason for caution.

Not Quite Hype

In almost every domain it has touched, AI has been accompanied by a wave of breathless predictions. In journalism, some evangelists fantasized about the full automation of reporting, the replacement of editors, and – on the visual side – the end of photography and illustration as professional endeavors.

The reality, at least so far, is considerably more modest. AI-generated imagery in quality journalism remains rare, bounded, and contested. It has not swept through newsrooms. It has not made humans redundant. The tools are there, the capabilities are impressive, and yet the industry has largely chosen not to use them in the ways that would be most disruptive.

Some of that restraint may be inertia. It may have to do with the slow pace at which norms evolve in a rather traditional profession. This is especially true for technologically more conservative countries like Germany.

However, a meaningful portion of the reluctance seems to reflect something that looks like genuine wisdom: a recognition that the things visual journalism is actually trying to do – bear witness, establish fact, hold a moment in time – are precisely the things that AI-generated images cannot do, almost by definition.

For illustrators and photographers who have watched other creative industries absorb wave after wave of AI disruption, the news industry's relative conservatism may be one of the more reassuring developments of the past few years. Needless to say, the forces reshaping media continue to squeeze image-driven reporting from all sides, from cutting-edge smartphone cameras to news influencers to shrinking budgets. But for the moment, visual GenAI has not displaced the human instinct for timing, truth, presence, or aesthetic appeal. Those qualities still give visual artists relevance and room to breathe.

Author
Alexander Plaum