
Voller Verdict: Are You Still at Risk?

Published on

March 5, 2026

The Voller Verdict Changed Everything - But Are Australian News Publishers Still Getting Moderation Wrong?

In 2021, the High Court of Australia handed down a ruling that changed the rules for every news publisher with a social media presence. The Voller decision established that media companies are liable as publishers for third-party comments posted on their Facebook pages. It landed like a thunderclap across Australian and New Zealand newsrooms, and the response was swift. Comment sections went dark. Pages were locked down. Engagement was sacrificed in the name of legal caution.

The legal risk is real. But the way most publishers have responded has created a different kind of damage - one that's quieter, slower, and just as serious.

Shutting Down Comments Isn't a Strategy

Disabling your comment section is a panic response dressed up as a decision. It feels decisive in the moment, but it sidesteps the actual problem rather than solving it.

Here's the irony: news publishers depend on audience trust and community engagement to survive commercially. Removing comments sends a clear signal to your readers - we don't trust you enough to let you speak. That's a difficult message to walk back.

The strategic stakes are high. For the first time, more Australians now access news via social media than through traditional outlets. Social conversation is not a supplementary channel - it is where your audience lives. Abandoning comment engagement in that environment is not caution. It's ceding ground you cannot afford to lose.

The real problem with the post-Voller response is that it presents a false binary: open comments with full liability exposure, or no comments at all. There is a third option, and most publishers haven't taken it seriously enough.

The Tool That Creates More Work Than It Saves

Some publishers did invest in moderation tools. Not all of those investments paid off - and a poorly configured moderation tool can be worse than no tool at all.

Blunt keyword-filter systems are the most common offender. They flag legitimate comments, frustrate genuine readers, and generate a backlog of false positives that your human moderators then have to review and reverse. The tool was supposed to reduce workload. Instead, it created a second layer of manual review on top of the first.

The problem runs deeper than efficiency. Consider the linguistic reality of Australian and New Zealand audiences. Colloquial swearing that is completely unremarkable in everyday ANZ conversation gets flagged as harmful content, silencing authentic audience voices. Meanwhile, coded language - phrases like 'the usual suspects' that carry discriminatory undertones - sails straight through because the tool is looking for explicit keywords, not contextual meaning.
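To make that failure mode concrete, here is a minimal Python sketch of a blunt keyword filter. The blocklist, example comments, and function name are invented for illustration - real tools are more elaborate - but the structural weakness is the same: matching on surface tokens rather than contextual meaning.

```python
# Hypothetical blunt keyword filter. The blocklist below is an invented
# example of words that are colloquial in everyday ANZ speech.
BLOCKLIST = {"bloody", "bugger"}

def keyword_filter(comment: str) -> bool:
    """Return True if the comment would be flagged for review."""
    words = {w.strip(".,!?").lower() for w in comment.split()}
    return bool(words & BLOCKLIST)

# A harmless colloquial comment gets flagged (false positive)...
print(keyword_filter("Bloody good article, well done!"))   # True

# ...while coded language sails straight through, because no
# individual word is on the list (false negative).
print(keyword_filter("The usual suspects at it again."))   # False
```

Both errors land on a human moderator's desk: the false positive as a reversal to process, the false negative as a complaint or a legal risk discovered too late.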

Inaccurate moderation doesn't reduce your liability risk. It redistributes it and erodes the community trust you're trying to protect at the same time.

One-Size Moderation Doesn't Fit Australian English

Most of the moderation tools available in the market were built for US or UK English contexts. They were trained on different linguistic norms, different cultural references, and different community standards. Applying them to ANZ audiences without significant customisation is structurally flawed from the start.

Australian and New Zealand English has its own colloquialisms, its own regional slang, and its own community norms around what counts as robust debate versus genuine harm. A tool that doesn't understand that distinction will keep getting it wrong, no matter how many times you adjust the settings.

The stakes of getting this wrong are already high. Research into Australian social media discourse around major news events has documented significant polarisation and the rapid spread of misinformation - patterns that make nuanced moderation not just desirable but necessary. Your comment section is not a neutral space. It is an active environment where harmful narratives can take hold quickly if moderation is clumsy or absent.

Effective moderation for ANZ news publishers cannot be a generic configuration. It requires tools that can learn the specific audience, adapt to the specific masthead, and be guided by the people who know both best.

Human Intelligence, AI Efficiency

The answer is not AI replacing your moderators. It is also not your moderators drowning in manual review without support. The answer is human-in-the-loop moderation - AI doing the heavy lifting, humans retaining control over the decisions that matter.

For an ANZ news publisher, that looks like this:

  • Customisable by organisation. Your moderation system should be trained on your specific editorial guidelines, your community norms, and the tone of your masthead - not a generic content policy written for a different market.
  • Flexible moderation modes. Manual, semi-automatic, and fully automatic options, so your editors can dial AI autonomy up or down depending on the story, the moment, and the risk level.
  • Full visibility and auditability. Your moderators should be able to see exactly what the system actioned and why - and correct it when it gets something wrong.
  • A learning loop. Every correction your team makes should feed back into the system, making it smarter about your specific community over time.
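The four requirements above can be sketched as a small Python model. Everything here - the `Mode` enum, the `ModerationQueue` class, the confidence threshold - is a hypothetical illustration of the pattern, not any particular product's API.

```python
from enum import Enum

class Mode(Enum):
    MANUAL = "manual"       # every comment goes to a human
    SEMI_AUTO = "semi"      # AI actions only clear cases, queues the rest
    FULL_AUTO = "auto"      # AI actions everything; humans audit afterwards

class ModerationQueue:
    def __init__(self, mode: Mode, threshold: float = 0.8):
        self.mode = mode
        self.threshold = threshold  # confidence needed before the AI acts alone
        self.audit_log = []         # full visibility: every decision, with its reason
        self.corrections = []       # learning loop: human reversals fed back to training

    def triage(self, comment: str, risk_score: float) -> str:
        """Decide who actions a comment: the AI or a human moderator."""
        if self.mode is Mode.MANUAL:
            decision = "human_review"
        elif self.mode is Mode.FULL_AUTO:
            decision = "auto_remove" if risk_score >= 0.5 else "auto_approve"
        else:  # SEMI_AUTO: only high-confidence calls are automated
            if risk_score >= self.threshold:
                decision = "auto_remove"
            elif risk_score <= 1 - self.threshold:
                decision = "auto_approve"
            else:
                decision = "human_review"
        self.audit_log.append((comment, risk_score, decision))
        return decision

    def correct(self, comment: str, corrected_label: str) -> None:
        """Record a human reversal so it can be fed back into the model."""
        self.corrections.append((comment, corrected_label))

# In semi-automatic mode, borderline scores escalate to a human
# while clear cases are handled automatically.
q = ModerationQueue(Mode.SEMI_AUTO)
print(q.triage("Bloody good article!", 0.15))       # auto_approve
print(q.triage("The usual suspects again.", 0.55))  # human_review
```

The design point is the escalation band: in semi-automatic mode the AI only acts where it is confident, everything in between goes to a person, and every reversal is captured so the system narrows that band for your specific community over time.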

This approach does more than manage legal risk. It actively builds the kind of community trust that news publishers cannot afford to lose. Trust in news media among Australian audiences is already under significant pressure. Clumsy moderation - or no moderation at all - accelerates that erosion. Intelligent, responsive moderation can begin to reverse it.

The Voller Risk Is Real - But So Is the Risk of Getting Moderation Wrong

The Voller verdict created genuine legal exposure for Australian and New Zealand news publishers. That is not in dispute. But the response cannot be to abandon your audience or deploy blunt tools that cause as many problems as they solve.

Publishers who invest in moderation that is intelligent, customisable, and built around the realities of ANZ audiences will be better positioned - legally, editorially, and commercially. They will be able to keep comment sections open, maintain community trust, and demonstrate the kind of active, considered oversight that the law now requires.

Moderation done well is not just a compliance function. It is a competitive advantage. The publishers who recognise that first will be the ones building audiences while others are switching the lights off.
