Los Angeles Times offers an AI cautionary tale
Generative AI Initiative Blog | 31 March 2025
If you’ve been reading this newsletter over the past year, you will know I am constantly looking for innovative ways in which news organisations are using GenAI.
Often, the use cases simply present more efficient ways to do what we are already doing rather than anything brand new, so I am particularly intrigued when I see use cases for work that has never really been done before.
It is in that spirit, then, that I would like to tell you about the Los Angeles Times.
As you probably already know, this is a U.S. newspaper that has seen a great deal of upheaval and turnover in recent years. A few weeks ago, it introduced a new AI feature, Insights, which provided AI-generated “perspectives” alongside its Opinion columns, editorials, commentary, and other pieces that argue a point of view on a particular topic.
“The purpose of Insights is to offer readers an instantly accessible way to see a wide range of different AI-enabled perspectives alongside the positions presented in the article. I believe providing more varied viewpoints supports our journalistic mission and will help readers navigate the issues facing this nation,” Executive Chairman Patrick Soon-Shiong wrote in a letter to readers.
You could argue there are many good reasons for doing this. News brands are trying hard to rebuild trust with readers — what better way than by transparently showing them that they are aware of different points of view on a topic? Most news organisations are also trying to reach new audiences — surely showing alternative perspectives could help them get beyond their core subscribers?
These AI-generated Insights were not reviewed by the newsroom. You can guess what happened next.
A columnist published an article about the white supremacist group the Ku Klux Klan. The AI-generated Insights duly came up with “different views on the topic,” including framing that played down the KKK’s role as a violent group targeting Black Americans.
That framing is abhorrent, and the backlash was swift. The Times ended up pulling the feature from that column but leaving it intact on others, leading observers to speculate that the paper really did not understand the potential magnitude of the problem.
This AI application is problematic for other reasons as well.
The AI also appears to favour AI, misrepresenting articles critical of the technology, and its sourcing (via the AI answer engine Perplexity) is dubious. Besides, isn’t the whole point of an opinion column to provide a cogent argument for a particular point of view rather than to spell out every possible perspective on it?
It comes just months after the Los Angeles Times introduced a different AI feature (via news app Particle) that classifies where a piece of content falls on the political spectrum and flags that for readers as Left, Center Left, Center, Center Right, or Right.
This “bias meter” has been panned for lacking transparency about how it decides which label ought to be applied, which is ironic because transparency is surely the point of such a label.
In that vein, then, who is doing transparency and trust-building well?
You have already read here about Every, which uses GenAI to let readers interrogate an article to find out what was not included in it.
Scandinavia’s Schibsted has introduced “ethics boxes,” titled “This is how we think,” to help readers understand why editors made certain decisions, such as naming a suspect in one news story and not in another. These boxes piggy-back on the AI-generated fact boxes and summaries that already appear across the site.
Brazil’s Aos Fatos has built a GenAI fact-checking chat product that can be accessed on platforms such as WhatsApp and Telegram.
If you’d like to subscribe to my bi-weekly newsletter, INMA members can do so here.