Why the Online Harms Act should scare Canadians
Cheryl Bowman, The Rural Alberta Report
November 21, 2025

Canadian Politics
A Warning from the UK: Canada’s Online Harms Bill Risks Silencing Free Speech and Criminalizing Thoughts
Canadians should be deeply alarmed by Bill C-63. While pitched as a way to protect people — especially children — from online harm, its contours suggest something far more chilling: a regulatory regime that could punish thoughts, penalize speech, and encourage a culture of surveillance. Worse still, it draws unnervingly from the UK’s experience, where arrests for online “offensive content” have exploded.
In the United Kingdom, police made more than 12,000 arrests in 2023 for communications deemed "grossly offensive" or "menacing." That works out to roughly 33 arrests per day. Many of those arrested were not violent criminals but ordinary citizens whose words, jokes or offhand comments were judged to cause "anxiety" or "inconvenience." Civil liberties groups warn that these vague legal terms are being wielded to police dissent and punish expression, not to counter genuine threats.
This is not a distant risk for Canada. Bill C-63 would introduce a new “peace bond” for people who are thought likely to commit a hate-related crime. Under this provision, even without any past conviction, a person could be subject to judicially imposed conditions — potentially for years — based on what they might do, rather than what they have done. That looks dangerously like pre-crime.
Moreover, C-63 would significantly raise the stakes for hateful content. The bill proposes amendments to the Criminal Code to expand hate-crime offences and increase maximum sentences, including life imprisonment for the gravest offences. This raises a chilling question: could mere expression, especially controversial or politically fraught speech, expose someone to extreme penalties?
Yet this heavy-handedness comes on top of legal tools that already exist. Canada's Criminal Code already punishes hate propaganda. In addition, the bill empowers the Canadian Human Rights Commission and Tribunal to take up online hate speech, reviving Section 13-style powers. Under this regime, individuals could file human rights complaints alleging hateful communication, and the tribunal could impose monetary penalties. That raises real concerns about weaponizing the law: people could be reported for what others suspect they believe, or for posts that a stranger finds offensive.
Some of the most troubling parts of Bill C-63 resemble the UK's machinery of speech enforcement. In the UK, the "non-crime hate incident" framework allows police to record and retain incidents even when no crime occurred, including for children. This recording of perceived future risk can have lasting reputational consequences, long after the supposed event. Canada's new peace bond mechanism similarly targets people based on risk, not action.
There is also an unsettling incentive structure in the bill. Under Part 3, the Canadian Human Rights Tribunal would hear complaints about hateful telecommunications and could order monetary awards of up to $20,000. That potentially creates a perverse marketplace: when a financial payout is tied to filing a complaint, the system could encourage false, frivolous or vindictive claims.
Why should any of this matter to everyday Canadians? Because the people most at risk are ordinary users: parents making jokes in private WhatsApp groups, students posting political commentary, or even someone sharing a controversial meme. These are not high-risk terrorists or violent extremists — but the broad and vague definitions in C-63 could ensnare them. Once people fear being reported or prosecuted, they self-censor. Democracy suffers when speech becomes a liability.
Worse, the burden is being placed not just on individuals, but on platforms. Bill C-63 would create a Digital Safety Commission with broad powers to demand transparency, penalize platforms, and force removal of content within 24 hours of a “flag.” That urgency may encourage platforms to err on the side of over-removal — especially when fines for non-compliance could reach 6 percent of gross global revenue or $10 million. The result? A regulatory ratchet that incentivizes censorship.
It is worth remembering: Canada already has strong legal protections for children online. The Criminal Code prohibits producing, distributing, or possessing child sexual abuse material. Reporting regimes for extreme content already exist. The question is not whether we need to protect people from serious online harms — but whether the solution should trade away civil liberties on the altar of safety.
Canada doesn't need to replicate the UK's free-speech crackdown. The statistics from Britain, with more than 12,000 arrests in a single year over trivial or ambiguous speech, should sound like an alarm bell. Bill C-63, in its current form, risks turning that warning into reality here. We must push back: we need robust online safety, yes, but not at the cost of turning citizens into suspects. We need amendments that constrain pre-crime bonds, limit financial incentives for reporting, and clarify definitions so that only real, imminent harms are penalized. Otherwise, we risk creating a digital regime in which fear of being reported, or worse, prosecuted, chills speech and stifles dissent.
Bill C-63 is not a child-protection measure. It is a sweeping framework that gives the government the power to decide which voices belong in the public square and which ones can be pushed out. By branding dissent as “harm,” it allows authorities to suppress any counter-narrative that challenges their preferred storyline. Canadians should recognize this for what it is: a censorship bill dressed in the language of safety.








