June 13, 2025
Banned from Facebook – how come?
Tania Waikato is Special Counsel at Tamaki Chambers and a distinguished Māori lawyer of Ngāti Awa, Ngāti Tūwharetoa, Ngāi Tūhoe, Ngāti Hine, and Te Whakatōhea descent. She has over 20 years’ experience specialising in Māori legal issues, constitutional law, and employment law, is a published academic, and holds a First Class Honours LLM from the University of Auckland.
Aotearoa New Zealand is once again grappling with questions around social media bans, particularly on Facebook, as concerns grow over online hate, misinformation, and foreign influence in politics. Recent discussions have highlighted calls for greater accountability from tech platforms, while critics warn of the risks to free speech and democratic participation. Facebook, now owned by parent company Meta, has faced repeated criticism globally over its handling of harmful content. In New Zealand, the issue gained major traction after events such as:
- The Christchurch mosque attacks in 2019, which were livestreamed on Facebook.
- The spread of COVID-19 vaccine misinformation.
- Online harassment and threats targeting Māori, Pasifika, and minority communities.
- Political manipulation through targeted advertising and foreign-sponsored disinformation.
This has prompted debate over whether bans, whether of individual users, of certain groups, or even platform-wide restrictions, are justified or necessary.
- User bans: Facebook regularly suspends or removes users who violate its policies, including through hate speech, incitement to violence, harassment, or misinformation.
- Group and page removals: Pages or groups promoting conspiracy theories or harmful ideologies (e.g. QAnon, white supremacy) have been banned globally, including in NZ.
- Government or corporate pushes for bans: Some have advocated for regulatory frameworks that could see public figures, politicians, or misinformation super-spreaders de-platformed in extreme cases.
- Self-exits or cultural boycotts: In a separate movement, some Māori and Pacific organisations have opted to limit their use of Facebook, citing the platform’s failure to protect Indigenous voices and cultural safety.
For many Māori, Facebook has been both a powerful tool for activism, language revitalisation (te reo Māori), and community building, and a space where racism and online harm are rampant. Calls have been made for:
- Stronger moderation in te reo Māori.
- Hate speech monitoring with cultural understanding.
- Support for Indigenous-led digital spaces as alternatives.
Some Māori digital leaders argue that the power to ban or silence must not rest solely with corporate algorithms, especially when Māori voices risk being unfairly flagged or removed.
New Zealand’s laws around digital platforms are still evolving. The Harmful Digital Communications Act and the upcoming Media and Communications Bill aim to place more responsibility on tech companies, but critics say enforcement is slow.
Civil liberties groups, meanwhile, warn that bans must not become a shortcut to censorship. They argue that:
- Banning people from Facebook without due process can violate rights to expression.
- Tech platforms often lack cultural context and transparency in enforcement.
- There is a risk of suppressing minority voices, especially those critical of government or mainstream views.