New Reuters Institute report dives into “selective news avoidance,” personalisation

By Ariane Bernard


New York, Paris


The recent 2022 Digital News Report from the Reuters Institute at Oxford University highlights a growing trend of what it calls “selective news avoidance”: the phenomenon where users avoid news that depresses them or that they feel will just create arguments. 

The share of respondents who report avoiding news averages 38% across countries in 2022, up from 29% in 2017, with marked differences across markets. This is particularly interesting for us as data practitioners because, beyond the immediately problematic implications for better-informed societies, we know this is the kind of behaviour that personalisation algorithms will also take notice of and respond to, in their own way. 

Research on news avoidance from the latest Digital News Report.

Most personalisation algorithms are driven to emphasise things where they see positive (active) user response. This is, after all, the goal of personalisation — to give you more of what you want as a user.

As it stands, most personalisation algorithms use click-through rate both as a training signal and as a tracked outcome. Personalisation algorithms that weigh other dimensions are certainly possible, but they are not prevalent. 
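To make the dynamic concrete, here is a minimal sketch of a CTR-driven ranker. All names, numbers, and the cold-start prior are my own illustration, not any platform's actual code:

```python
# Hypothetical sketch of a CTR-driven personalisation ranker.
# All names and numbers are illustrative, not any platform's real logic.

def predicted_ctr(user_history: dict, item_topic: str) -> float:
    """Estimate click-through rate from the user's past clicks on this topic."""
    stats = user_history.get(item_topic, {"clicks": 0, "impressions": 0})
    if stats["impressions"] == 0:
        return 0.05  # arbitrary cold-start prior
    return stats["clicks"] / stats["impressions"]

def rank_feed(user_history: dict, candidates: list) -> list:
    """Order candidate items purely by predicted CTR -- the single
    dimension most personalisation systems optimise."""
    return sorted(candidates,
                  key=lambda item: predicted_ctr(user_history, item["topic"]),
                  reverse=True)

# A user who clicks celebrity news far more often than hard news:
history = {"celebrity": {"clicks": 40, "impressions": 100},
           "hard_news": {"clicks": 2, "impressions": 100}}
feed = rank_feed(history, [{"id": 1, "topic": "hard_news"},
                           {"id": 2, "topic": "celebrity"}])
# The celebrity item ranks first; hard news sinks.
```

Nothing here is exotic: the whole feedback loop falls out of one `sorted` call keyed on past clicks.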

So with users already telling us they are less likely to engage with hard news, personalisation algorithms will further depress the ability of this kind of news to be discovered in highly mediated spaces like social media news feeds. 

While nobody knows Facebook’s secret sauce except Facebook, there is definitely a factor in its personalisation algorithm that compares a post’s engagement to the page’s baseline to decide whether the post is under-performing or over-performing, and then further boosts posts that over-perform. This is why baby and wedding posts always win at Facebook.
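The baseline-relative boosting described above can be sketched in a few lines. The thresholds and multipliers below are invented for illustration; only the shape of the mechanism comes from the article:

```python
# Illustrative sketch (my numbers, not Facebook's) of baseline-relative
# boosting: posts that out-perform a page's typical engagement get
# extra distribution, and under-performers get throttled.

def boost_factor(post_engagement: float, page_baseline: float) -> float:
    """Compare a post to its page's baseline engagement."""
    ratio = post_engagement / page_baseline
    if ratio > 1.2:
        return 1.5   # over-performing: widen distribution further
    if ratio < 0.8:
        return 0.5   # under-performing: show it to fewer people
    return 1.0       # roughly on baseline: no change

# A wedding post doing 3x the page's usual engagement gets amplified,
# while a hard-news post doing half the baseline gets throttled --
# the feedback loop the article describes.
```

Note how the loop compounds: the boost produces more impressions, which produce more engagement, which keeps the boost in place.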

Indeed, the new Reuters Institute report notes that “those who often avoid the news are twice as likely to say they see too much news on both Facebook and Twitter when compared with the average user.” 

Now, media companies are also compounding this problem with their own bias in selection. In a paper released in late 2021, University of Antwerp researcher Kenza Lamot looked at the headlines various Belgian news media were promoting on Facebook and found the media were already “softening” their selected promotion relative to their actual production. Knowing that hard news doesn’t “reach” as well, the media don’t push it to Facebook as much. 

The report finds TikTok's personalisation algorithms are different than those of other platforms.

An interesting tidbit in the Reuters Institute report pushes against the narrative that personalisation algorithms can only be the enemy of hard news: it spotlights the growth of TikTok as a news source for a younger generation.

What is interesting here is that TikTok’s personalisation algorithms are somewhat different from Facebook’s in that the source of a piece of content is a very light factor in its discovery logic. 

I am basing this observation on New York Times articles from a few months ago, which leaned on a leaked internal explainer of the algorithm. That leaked explainer, however, is nowhere to be found on the Internet; I went down some proper rabbit holes looking for it and came up empty.

With TikTok, if a creator — even an unknown creator followed by few people — produces an engaging piece of content that people actually end up watching (for example, they get hooked in their first few seconds and keep playing), a viewing user may very well end up being shown news and content outside of their filter bubbles because the main dimension TikTok is looking at is the clip rather than the source. 
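As a toy contrast between the two discovery logics described above (the weights are entirely mine, invented to make the point), consider how much a source's standing matters in each:

```python
# Hypothetical contrast between source-weighted and clip-weighted
# discovery scoring. The 0.2/0.8 and 0.9/0.1 weights are invented
# for illustration -- no platform publishes these numbers.

def score_source_centric(clip_quality: float, source_follow_rate: float) -> float:
    """A Facebook-style score: the source's standing dominates."""
    return 0.2 * clip_quality + 0.8 * source_follow_rate

def score_clip_centric(clip_quality: float, source_follow_rate: float) -> float:
    """A TikTok-style score: watch-through of the clip itself dominates."""
    return 0.9 * clip_quality + 0.1 * source_follow_rate

# An engaging clip (quality 0.9) from an unknown creator (follow rate 0.01):
unknown_hit_src = score_source_centric(0.9, 0.01)   # stays buried
unknown_hit_clip = score_clip_centric(0.9, 0.01)    # surfaces anyway
```

Under the clip-centric weighting, the unknown creator's strong clip scores high enough to escape the filter bubble; under the source-centric weighting, it never does.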

Says a 22-year-old female in the United States: “It’s so addictive … and where it lacks in trustworthiness, it excels in presentation. It’s a news source I end up consuming because I’m also often scrolling TikTok for other reasons, but the algorithm ends up providing news anyways.”

This leaves us:  

  • With some added responsibility to write personalisation algorithms that defend news diet diversity as a dimension of their output, even at the expense of some algorithmic performance. TikTok says it addresses this in its algorithm.
  • Or with the option of running several different personalisation algorithms on users, so that filtering isn’t so personal that diverse content can’t enter the race for consideration (this is more the TikTok model). 

A parent who unsuccessfully introduces broccoli to their toddler will offer broccoli again at a later time, and other types of green vegetables besides. Even if the candy reliably gets selected, that doesn’t mean the personalisation algorithm can’t be written to selectively put broccoli back on the menu again and again.
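The broccoli strategy translates directly into a diversity quota: reserve some feed slots for hard news regardless of its click-through rate. This sketch (slot counts and share invented by me) shows one simple way to do it:

```python
# A sketch of deliberately re-inserting under-clicked hard news
# ("broccoli") into a feed otherwise ranked by engagement ("candy").
# The 20% share and feed size are illustrative parameters, not a recipe.

import random

def build_feed(ranked_candy: list, broccoli_pool: list,
               feed_size: int = 10, broccoli_share: float = 0.2) -> list:
    """Reserve a fixed share of slots for hard news regardless of CTR."""
    n_broccoli = max(1, int(feed_size * broccoli_share))
    feed = ranked_candy[: feed_size - n_broccoli]
    # Rotate which hard-news items appear, so the same piece isn't
    # offered (and refused) forever.
    feed += random.sample(broccoli_pool, min(n_broccoli, len(broccoli_pool)))
    random.shuffle(feed)  # interleave rather than burying broccoli at the end
    return feed
```

The engagement metrics on the feed as a whole will dip slightly, which is exactly the trade the first bullet above asks us to accept.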

If you’d like to subscribe to my bi-weekly newsletter, INMA members can do so here.

